27 August 2021
Ioannis Souflas
European R&D Centre,
Hitachi Europe, Ltd.
Artificial Intelligence (AI) has a central role in enabling Automated Vehicles (AVs). However, it is imperative that AI is used in a way that ensures the safety of its users and improves their Quality of Life (QoL). In our blog post last year, we summarized some of our research activities in the HumanDrive project related to data-driven solutions for AVs. In this blog post, we look at our joint research activity with the University of Leeds, as part of HumanDrive, on how humans can help to build safe AI for AVs through Human-in-the-Loop simulation [1] (Figure 1).
Figure 1: Human-in-the-Loop simulation process as applied to AVs.
Realistic simulation environments help automated driving software developers to virtually verify the “Safety of the Intended Functionality” (SOTIF) [2]. Specifically, data-driven systems and complex algorithms need to be tested in a wide variety of conditions to ensure that they operate safely and exhibit acceptable, human-like behavior. Testing every possible scenario in real-world environments would require an unrealistic amount of time and resources. Simulation can help to solve this problem.
Our participation in the HumanDrive project allowed us to experiment with the University of Leeds Driving Simulator (Figure 2). This facility along with the expertise of the Human Factors and Safety research group of the University of Leeds helped us to explore how Human-in-the-Loop simulation can be used to improve the safety of AI-based solutions for AVs.
Figure 2: The University of Leeds driving simulator.
One of the key features of the University of Leeds driving simulator is its ability to accurately emulate vehicle dynamics, providing a more realistic driving experience. Another activity that was important for the validity of this study was the creation of a digital-twin virtual environment of real urban and rural locations in the UK (Figure 3). This helped us to bridge the gap between the real and virtual worlds and provided a more immersive experience for the participants of the experiments.
Figure 3: Visual realism level of the virtual environment (right) compared to real-world (left).
Our AI-assisted Automated Driving System (ADS) was connected to the University of Leeds driving simulator using the Robot Operating System (ROS). The AI-assisted ADS received simulated inputs about the surrounding environment (e.g. the road network and the locations of vehicles and pedestrians) and vehicle-dynamics information such as lateral and longitudinal acceleration. This information was then used by our specialized AI, responsible for path planning and control, to decide the best course of action and adjust the steering, acceleration and deceleration demands accordingly (Figure 4).
Figure 4: Simplified connection diagram between the AI-assisted ADS and the driving simulator.
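To make the data flow above concrete, the following is a deliberately simplified sketch of one control cycle: simulated vehicle-dynamics state goes in, steering and acceleration demands come out. The field names, gains, and control law are all hypothetical illustrations, not the actual HumanDrive planner, which is far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class SimState:
    speed: float           # m/s, from the simulator's vehicle-dynamics model
    target_speed: float    # m/s, requested by the path planner
    lateral_offset: float  # m, signed distance from the planned path
    heading_error: float   # rad, vehicle heading relative to the path tangent

def ads_step(state: SimState, k_lat=0.4, k_head=1.2, k_speed=0.8):
    """One control cycle: map simulated state to actuation demands.

    A toy proportional law stands in for the real planning/control AI:
    steer back toward the path, and accelerate toward the target speed.
    """
    steering = -(k_lat * state.lateral_offset + k_head * state.heading_error)
    accel = k_speed * (state.target_speed - state.speed)
    return steering, accel

# Example cycle: vehicle is 0.5 m right of the path and 3 m/s below target,
# so the sketch demands a corrective (negative) steer and positive acceleration.
steer, accel = ads_step(SimState(speed=10.0, target_speed=13.0,
                                 lateral_offset=0.5, heading_error=0.02))
```

In the real setup this exchange happens over ROS topics rather than direct function calls, with the simulator publishing the state and the ADS publishing the demands.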
Several participants were invited to take part in this study. The experiment started with each participant driving the vehicle simulator in manual mode. Next, our AI-assisted ADS took control of the driving commands and the participants assessed the driving style. Each participant was asked to complete questionnaires both at the start and the end of each experiment, as well as give verbal feedback. This process helped us to refine the driving behavior of the AI and ensure that each passenger had a safe and comfortable journey. The following charts (Figure 5) give a simple quantitative picture of the performance of the refined AI in terms of speed and yaw-rate errors with respect to the human drivers. Note that the errors follow a normal distribution with mean close to zero and mostly constrained within two sigma (which covers roughly 95% of samples for a normal distribution). For more information we encourage you to read our paper [1].
Figure 5: Error distribution of the AI with respect to the human drivers.
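The two-sigma check described above is straightforward to reproduce. The snippet below does so on synthetic, normally distributed stand-in data (the real samples come from the simulator trials and the sigma value here is made up): it estimates the mean and standard deviation of the error samples and measures the fraction lying within two standard deviations of the mean.

```python
import random
import statistics

# Hypothetical speed-error samples in m/s; a stand-in for the Figure 5 data.
random.seed(0)
errors = [random.gauss(0.0, 0.6) for _ in range(5000)]

mu = statistics.fmean(errors)
sigma = statistics.stdev(errors)

# Fraction of samples within two standard deviations of the mean.
# For a normal distribution this is about 95.4%.
within_2sigma = sum(abs(e - mu) <= 2 * sigma for e in errors) / len(errors)
```

On data like this, `mu` comes out near zero and `within_2sigma` near 0.95, which is the shape of result the study reports for the refined AI's speed and yaw-rate errors.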
This study helped us to understand that Human-in-the-Loop simulation can be a valuable tool in the development of safe AI-based solutions for AVs. Specifically, we learned that keeping humans in the loop has two key benefits: (1) the AI gets to learn what is commonly acceptable to humans, and (2) humans begin to build trust in the AI as they see it learn and improve in a controlled, safe testing environment.
Many thanks to the University of Leeds academic staff (Albert Solernou, Richard Romano, Foroogh Hajiseyedjavadi, Evangelos Paschalidis, Natasha Merat) for sharing their expertise.