This article was written by understand.ai and first published on blog.understand.ai.
The automotive industry is currently undergoing several major shifts at once: from internal combustion engines to electrified powertrains, from owning a car to shared mobility services, and from manual, assisted driving to (partially) automated driving. For the development of autonomous driving functions, the whole development process needs to be transformed as well, because the test space for these kinds of functions is incredibly large.
To prove that an autonomous vehicle is safer than the average human driver by that measure, an estimated 8 to 10 billion miles would have to be driven without a fatality. Take a moment to think about the enormous variety of scenarios you can encounter in 8 billion miles. What could go wrong, and thus needs to be tested?
It is not feasible for most companies to capture this high complexity and perform this large amount of testing on real roads only. That is why simulation will play a crucial role in the development cycle, but most likely even more so in the testing and validation of autonomous driving functions. To enable this kind of front-loading process, in which bugs can be found much earlier in the development process, realistic test scenarios are needed to challenge the driving functions in the same way they would be challenged during real-world driving tests. The current process of creating these kinds of scenarios by hand with different kinds of editors has multiple drawbacks:
Firstly, this approach does not scale: it is almost entirely manual, and it is limited to the objects and assets provided by the editor.
Secondly, the resulting scenarios are subjective: they are imagined and re-enacted from rule-of-thumb estimates. This lacks the fine, minute details of human behavior, movement, and negotiation, and results in coarse scenarios with low-dimensional semantics.
Thirdly, fictional scenarios are not realistic: when engineers or designers think of a critical scenario, they usually think of a single critical behavior, such as a slow-reacting driver. In reality, a critical situation is often the result of multiple co-occurring critical behaviors and circumstances: a driver reacting slowly because they are looking at their smartphone, a child chasing a ball onto the street, and a wet road surface that increases the braking distance.
If you generate insights about your algorithm by simulating these invented scenarios, you cannot transfer them to real-world driving unless the scenarios are realistic enough. The real world must be the benchmark.
The Scenario Generation Process
1. Scenario Identification: The process starts with the identification of interesting parts in the petabytes of data recordings that different OEMs and Tier 1s collected during their real-world test drives. These interesting scenarios can be overtaking maneuvers, unprotected left turns, pedestrians crossing in front of the car, etc. In general, these are scenes in which the automated driving function, or parts of it like perception or planning, is challenged by the right scenarios.
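To make the identification step concrete, here is a minimal sketch in Python. It scans a recording for hard-braking events as a crude proxy for "interesting" scenes. The `Frame` schema, the `find_hard_braking` function, and the threshold are all hypothetical simplifications; a real pipeline would combine many more signals (perception output, map context, driver inputs).

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One time step of a recorded drive (hypothetical, simplified schema)."""
    t: float      # timestamp in seconds
    speed: float  # ego speed in m/s
    accel: float  # ego longitudinal acceleration in m/s^2

def find_hard_braking(frames, threshold=-4.0):
    """Return (start, end) time windows where deceleration exceeds the threshold.

    A stand-in for scenario identification: flag spans of the recording
    worth extracting as candidate scenarios.
    """
    windows, start = [], None
    for f in frames:
        if f.accel <= threshold and start is None:
            start = f.t                      # braking event begins
        elif f.accel > threshold and start is not None:
            windows.append((start, f.t))     # braking event ends
            start = None
    if start is not None:                    # event runs to end of recording
        windows.append((start, frames[-1].t))
    return windows

# Example: a short recording with one hard-braking event
rec = [Frame(0.0, 20.0, 0.0), Frame(0.1, 19.0, -5.0),
       Frame(0.2, 18.0, -6.0), Frame(0.3, 18.0, 0.0)]
print(find_hard_braking(rec))  # [(0.1, 0.3)]
```

In practice, the flagged time windows would then be cut out of the recording, together with some lead-in and lead-out time, and handed to the extraction step.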
2. Scenario Extraction: After the target scenarios have been identified, understand.ai leverages its data-enrichment pipeline to extract the relevant metadata from the raw data. The different objects (vehicles, pedestrians, …), including their class, trajectory, etc., must be extracted and localized precisely from the recorded sensor data to preserve the semantics and criticality of the scenario. These extracted and localized trajectories are then transformed into a scenario that can be run in the simulation environment. This so-called "replay" scenario reflects the originally recorded scene very accurately. Over time, a scenario database of the right scenarios at the right quality is built.
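A replay scenario essentially drives every extracted object along its recorded trajectory. The sketch below shows one way this could look, assuming extraction has already produced per-object timestamped positions; the `TrackedObject` type and `position_at` helper are illustrative, not understand.ai's actual data model. Linear interpolation between recorded samples lets the simulator query any object's position at an arbitrary simulation time.

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class TrackedObject:
    """An extracted object with its class and timestamped positions (hypothetical)."""
    obj_class: str    # e.g. "car", "pedestrian"
    trajectory: list  # [(t, x, y), ...] sorted by t

def position_at(obj, t):
    """Linearly interpolate an object's position at time t for replay."""
    traj = obj.trajectory
    times = [p[0] for p in traj]
    i = bisect_left(times, t)
    if i == 0:                    # before first sample: clamp to start
        return traj[0][1:]
    if i == len(traj):            # after last sample: clamp to end
        return traj[-1][1:]
    (t0, x0, y0), (t1, x1, y1) = traj[i - 1], traj[i]
    a = (t - t0) / (t1 - t0)      # interpolation factor in [0, 1]
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

# A pedestrian walking 4 m in 2 s, queried halfway through
ped = TrackedObject("pedestrian", [(0.0, 0.0, 0.0), (2.0, 4.0, 0.0)])
print(position_at(ped, 1.0))  # (2.0, 0.0)
```

The simulator would step through time, query each tracked object this way, and place the objects into the scene, so the recorded situation unfolds exactly as it did on the road.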
3. Scenario Fuzzing: To generate variations from a replay scenario, we abstract it into a so-called "logical" scenario. The trajectories are no longer represented as points over time, as recorded by the sensors, but as distinct maneuvers performed by the respective traffic participant, such as a car, a bicyclist, or a pedestrian. These maneuvers are then parameterized to allow large-scale scenario-based testing, locally or in the cloud, to cover a larger part of the huge test space even in the virtual test environment. And not only the maneuvers themselves are parameterized: the environmental conditions, like the road, the weather, and other factors that influence the driving algorithm, can also be varied to challenge the function in different ways. For this purpose, it is important to identify relevant, realistic, and meaningful parameter ranges. This is an art in itself, which we will discuss in upcoming blog articles.
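The fuzzing idea can be sketched as sampling concrete scenarios from a logical scenario's parameter ranges. The parameter names and ranges below are invented for illustration; real logical scenarios are typically described in standardized formats rather than a plain dictionary, and the choice of ranges is exactly the "art" mentioned above.

```python
import random

# A hypothetical "logical" scenario: a pedestrian-crossing maneuver whose
# parameters are ranges rather than the single values that were recorded.
logical_scenario = {
    "pedestrian_speed": (0.8, 2.5),  # m/s
    "crossing_offset": (-2.0, 2.0),  # m, lateral start offset from recorded path
    "road_friction": (0.3, 1.0),     # low values approximate a wet surface
}

def sample_concrete(logical, rng):
    """Draw one concrete scenario by sampling each parameter range uniformly."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in logical.items()}

# Generate 100 concrete variants for large-scale scenario-based testing
rng = random.Random(42)  # fixed seed keeps the test batch reproducible
variants = [sample_concrete(logical_scenario, rng) for _ in range(100)]

# Every sampled value stays inside its declared range
assert all(logical_scenario[k][0] <= v <= logical_scenario[k][1]
           for s in variants for k, v in s.items())
```

Uniform sampling is only the simplest strategy; in practice, combinatorial designs or search-based methods that steer sampling toward critical parameter combinations cover the test space far more efficiently.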
This kind of development process, based on real-world simulation scenarios, improves confidence in deploying safer algorithms to vehicles. On top of that, it helps with the validation and homologation efforts for autonomous vehicles by reducing the 8 billion miles mentioned above to several thousand relevant test scenarios.
In the end, it is not the number of miles driven that matters, but the number of relevant miles driven, whether in simulation (if the scenarios are realistic enough) or in the real world.
This development process could therefore also improve public acceptance of autonomous vehicles, provided the scenarios used are made public and accepted. As part of the Validation and Verification Consortium of the German Automotive Industry (VDA), we contribute extracted scenarios to a public scenario database. So our roads will become safer, one scenario at a time.
What is needed to successfully use scenarios? A pipeline to create the right scenarios with the right variations and right parameter ranges for complete and rigorous coverage of the test cases. And, of course, the 3-step process described above.
We at understand.ai, together with dSPACE, provide a pipeline to generate real-world test scenarios in a highly automated way. These scenarios are used to test and validate the developed functions under realistic conditions in a virtual simulation environment. Because they are derived from real-world sensor measurements, our customers can replay the situations they encountered during their test drives in their simulation environment and check whether updated versions of their driving algorithms perform better or worse. In the end, challenging the driving functions early in the development process with the right scenarios, the right variations, and the right parameters is key to measurable progress in autonomous driving.
understand.ai provides the high-quality training and validation data that enable mobility companies to confidently develop computer vision and machine learning models that reliably and safely power autonomous vehicles. Our advanced capabilities include segmentation and bounding-box annotations for LiDAR and video, with specific meta-attributes and instance IDs for object tracking. By applying specialized AI technology and continuous quality improvement to accurately and quickly annotate millions of images, we accelerate the production of the ground truth required to make autonomous driving a reality. understand.ai is headquartered in Karlsruhe, Germany, and has offices in Berlin and San Francisco. Our engineers have relevant experience gained at innovative companies such as BMW, Google, and Mercedes-Benz.
TECH.AD is happy to have understand.ai as Global Partner in 2020!
To learn more about understand.ai, join their session at TECH.AD Berlin:
Business Case – Scenario-driven development 2.0
on March 02, 2020 at 11:05 AM