18 – Digital Twin and Augmented Reality for Automated Vehicles
Date and Time: Tuesday, July 30, 2024: 5:00 PM - 6:30 PM
Location: Indigo BC
Junfeng Zhao
Assistant Professor, Arizona State University
Presentation Description
Relevance to automated road transport – Automated vehicles (AVs) have the potential to greatly enhance vehicle safety. That safety, however, must be demonstrated through a verification and validation (V&V) process that inevitably includes a significant testing component, including on-road testing across a variety of operating conditions. Gathering sufficient data to complete the required on-road testing, for example by operating a fleet of prototype AVs in various locations, can present a significant barrier in terms of both time and cost. This work is relevant to automated road transport because it develops tools and methodologies that accelerate data collection and validate the methodology of the testing component of the AV V&V process.
The purpose of the work being reported – Science Foundation Arizona and Arizona State University are developing the Automated Vehicle Test and Evaluation Process (AV-TEP), a safety case-based V&V framework that can serve as the basis for evaluating a vehicle throughout its development process. The AV-TEP framework includes a Testing Pillar spanning simulation, closed-course, and on-road testing. As part of validating the Testing Pillar methodology, ASU’s Battery Electric & Intelligent Vehicle (BELIV) Lab is developing a research AV platform with advanced tools for accelerated data gathering in a variety of operating conditions. These tools include digital twin (DT) and augmented reality (AR) technology. The combination of DT and AR allows the BELIV research AV to be tested safely in complex scenarios with a diversity of simulated road users, while retaining the high fidelity of a real-world vehicle under test.
The methodology used – We describe the series of steps used to enable DT and AR on the research AV platform. A key benefit of our methodology is that it is built on open-source tools and can therefore be replicated easily and cost-effectively. We integrate Autoware’s self-driving software stack with the CARLA simulator: CARLA provides a high-fidelity, open-source virtual environment suited to AR, along with a CARLA-ROS bridge that simplifies integration with Autoware’s ROS 2-based software stack. The first step is to create a DT of the AV, modeled in the open-source 3D software Blender; CARLA provides a well-documented procedure for importing it into Unreal Engine. We also create a DT of the test area: both the HD map for Autoware and the virtual map for CARLA are built from a point-cloud map generated using SLAM, with elevation data and aerial imagery incorporated into the CARLA map and measurements of the physical lanes taken to improve map accuracy for ScenarioRunner. To enable AR, we inject virtual objects into Autoware’s perception pipeline: CARLA provides ground-truth data for the virtual objects, which are converted into an Autoware-compliant format and fed to a merger node that asynchronously combines them with the real-world objects detected by Autoware. Finally, we localize the virtual vehicle in CARLA using the real vehicle’s GPS-RTK system: CARLA’s Python API can set an actor’s pose to desired values, in this case the GPS-RTK pose, and because the pose topic publishes at high frequency (~90 Hz), we transform the virtual vehicle at every time step to keep the two vehicles synchronized. Sketches of the merger node and the pose synchronization follow.
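As a rough illustration of the merger step, the sketch below shows a minimal ROS 2 (rclpy) node that asynchronously combines real detections with CARLA-provided virtual objects. The message package (autoware_auto_perception_msgs) and all topic names are assumptions for illustration, not necessarily those used in the BELIV Lab implementation.

```python
# Minimal sketch of an asynchronous object-merger node.
# Message package and topic names are illustrative assumptions.
import rclpy
from rclpy.node import Node
from autoware_auto_perception_msgs.msg import DetectedObjects  # assumed package


class ObjectMerger(Node):
    """Combines real-world detections (Autoware) with virtual objects (CARLA)."""

    def __init__(self):
        super().__init__('object_merger')
        self._real = DetectedObjects()     # latest real-world detections
        self._virtual = DetectedObjects()  # latest virtual objects (ground truth)

        self.create_subscription(DetectedObjects, '/perception/real_objects',
                                 self._on_real, 10)
        self.create_subscription(DetectedObjects, '/carla/virtual_objects',
                                 self._on_virtual, 10)
        self._pub = self.create_publisher(
            DetectedObjects, '/perception/merged_objects', 10)

    def _on_real(self, msg):
        self._real = msg
        self._publish(msg.header)

    def _on_virtual(self, msg):
        self._virtual = msg
        self._publish(msg.header)

    def _publish(self, header):
        # Merge whichever lists are currently cached; no hard time synchronization.
        merged = DetectedObjects()
        merged.header = header
        merged.objects = list(self._real.objects) + list(self._virtual.objects)
        self._pub.publish(merged)


def main():
    rclpy.init()
    rclpy.spin(ObjectMerger())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```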
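The pose synchronization can likewise be sketched with the CARLA Python API. The function below teleports the virtual ego vehicle to the pose reported by the real vehicle, assuming the GPS-RTK pose has already been converted into the CARLA map frame; it would be called from the ~90 Hz pose callback.

```python
# Minimal sketch of keeping the virtual ego aligned with the real vehicle.
# Assumes rtk_pose carries x, y, z (m) and roll, pitch, yaw (deg)
# already expressed in the CARLA map frame.
import carla


def sync_virtual_ego(ego_actor: carla.Actor, rtk_pose) -> None:
    """Set the virtual ego's transform to the real vehicle's GPS-RTK pose."""
    transform = carla.Transform(
        carla.Location(x=rtk_pose.x, y=rtk_pose.y, z=rtk_pose.z),
        carla.Rotation(roll=rtk_pose.roll, pitch=rtk_pose.pitch, yaw=rtk_pose.yaw),
    )
    # Called at every step of the ~90 Hz pose topic to keep both vehicles in sync.
    ego_actor.set_transform(transform)
```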
Key findings from the work and their significance for the implementation of automated road transportation systems – We successfully demonstrated the use of DT and AR in the V&V of an AV in car-following scenarios. AR reduced the physical setup time of the experiments: spawning numerous vehicles in the virtual environment to exercise the AV software stack required only the CARLA Python API (see the sketch below), which significantly reduced the time needed to assemble physical objects such as vehicles and pedestrians for live testing. AR also allows us to test new synthetic scenarios that have not been recorded previously; a scenario author has complete freedom, within the bounds of the scenario format, to create test cases of varying complexity, including diverse weather conditions and road users. With the help of DT technology, these scenarios can approach real-world fidelity. The use of simulation-based DTs and AR in a high-fidelity virtual environment can lower development costs, speed up AV V&V, and help AVs fulfill their potential benefits of enhanced safety, efficiency, and mobility.
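For reference, populating a scene with virtual road users via the CARLA Python API can look like the following sketch; the host, port, and actor count are illustrative defaults, not values from the reported experiments.

```python
# Minimal sketch of spawning background vehicles for an AR test scenario.
import random
import carla

client = carla.Client('localhost', 2000)  # default CARLA server address
client.set_timeout(10.0)
world = client.get_world()

blueprints = world.get_blueprint_library().filter('vehicle.*')
spawn_points = world.get_map().get_spawn_points()

virtual_actors = []
for spawn_point in random.sample(spawn_points, min(10, len(spawn_points))):
    actor = world.try_spawn_actor(random.choice(blueprints), spawn_point)
    if actor is not None:
        actor.set_autopilot(True)  # hand control to CARLA's Traffic Manager
        virtual_actors.append(actor)
```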
Speaker Biography
Dr. Junfeng Zhao is an assistant professor at Arizona State University, where he founded the Battery Electric & Intelligent Vehicle (BELIV) Lab. He is a graduate faculty member of the Robotics & Autonomous Systems (RAS) program and the Clean Energy Systems (CES) program. His research interests include connected and automated vehicles (CAV), CAV system integration and safety assessment, motion planning and controls, cooperative perception, AI/ML in automotive and robotics systems, electrified propulsion system controls, and intelligent transportation systems. Before joining the ASU Fulton Schools of Engineering and the Polytechnic School, Dr. Zhao worked as a senior researcher at General Motors R&D. Dr. Zhao has authored over 20 journal and conference papers and holds 15 patents.
Presentation File
IP-AV Activities at ASU
Category
Invited Presenter