Simulation is critical to accelerating development in many industries. In the automotive self-driving field, simulation has traditionally been used to test planning and motion-control algorithms. Replay simulation is also used to develop perception systems: data from vehicle sensors are recorded and played back through different versions of the software stack for performance testing. However, such simulations are mostly limited to scenarios that real cars have actually encountered.
Another type of simulation is becoming increasingly important: generating high-quality synthetic data that accurately conveys information about real traffic situations. The problem with relying exclusively on road data is that enormous amounts of it must be collected and annotated to approach the limits of the perception modules across different operational domains. Moreover, perception algorithms overfit to the available data and fail when operating outside the environments and conditions they were developed for. Synthetic data, by contrast, can be generated quickly and cheaply, and its annotations are produced automatically from full knowledge of the simulated environment.
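Automatic annotation is possible because the simulator knows the exact pose of every object. As a minimal sketch (the camera intrinsics and the `auto_bbox` helper are illustrative, not from any real tool), here is how a 2D bounding-box label can be derived for free by projecting an object's known 3D corners through a pinhole camera:

```python
# Sketch: in simulation, ground-truth labels come "for free" from the
# scene state. We project known 3D object corners through a hypothetical
# pinhole camera to obtain 2D bounding boxes automatically.
import numpy as np

def project_points(points_3d, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of Nx3 camera-frame points (z forward) to pixels."""
    p = np.asarray(points_3d, dtype=float)
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)

def auto_bbox(corners_3d):
    """Axis-aligned 2D bounding box from an object's 3D corner points."""
    px = project_points(corners_3d)
    (u0, v0), (u1, v1) = px.min(axis=0), px.max(axis=0)
    return u0, v0, u1, v1

# A 2 m cube centered 10 m in front of the camera:
corners = np.array([[x, y, 10.0 + z]
                    for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
print(auto_bbox(corners))
```

A real pipeline would emit the full label set (class, occlusion, 3D box, segmentation mask) the same way, straight from the scene graph.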
Challenges of synthetic data generation for perception modules
Although the task of modeling synthetic sensor data may seem straightforward, it is in fact very difficult. Beyond creating realistic synthetic environments for different regions (San Francisco or Tokyo, for example), modeling each sensor type requires detailed knowledge of the underlying physics and of the characteristics of the various sensors used in the industry. Furthermore, while simulations for other applications can run much slower than real time, most self-driving algorithms require near-real-time performance. Different use cases therefore demand different levels of simulation performance and fidelity.
Despite the significant effort invested in modeling each sensor, experts expect that a noticeable gap between real and synthetic data will remain for the foreseeable future. Perception algorithms can be trained on real sensor data and tested on synthetic data (real-to-synthetic transfer) or vice versa (synthetic-to-real transfer), and different kinds of algorithms will behave differently under these transfers. The issue is not unique to simulated data: a perception algorithm trained with a specific sensor suite on California roads will likely perform worse with a different sensor suite, and it may also perform poorly when tested on roads in other regions.
Figure 1: testing perception systems on synthetic data
Creation of 3D synthetic environments
Decades of work in the entertainment industry have produced many approaches to building 3D environments. However, there are important differences between the self-driving and entertainment industries. Both demand photorealism, but environments for autonomous vehicles face additional requirements: they must be created cheaply and quickly (where the entertainment industry can spend months), they must be highly realistic both to the human eye and to the sensors, they must be variable, and they must support many test cases.
Traditionally, 3D environments are built by hand: 3D artists create assets and place them in the world. This approach produces photorealistic results and works well for demos. Because of its manual nature, however, it does not scale to recreating regions from around the globe, nor does it yield the number of environment variations needed to test self-driving vehicles. Hand-crafted virtual environments thus quickly run into their limits.
An alternative is to scan the real world so that the resulting environment matches its reference. The drawback is that real-world scan data contain many errors and inaccuracies. Cameras and lidars provide only approximate measurements: lighting is baked into the scans, and surface materials cannot be recovered from them. The scans may also contain gaps, mislabeled geometry, and moving objects that must be removed. In addition, this method demands substantial storage and compute resources, and it can only reproduce places that exist in the real world.
A relatively new approach is to create virtual worlds through procedural generation. Large areas and whole cities can be built quickly from a variety of input data, with the world constructed by algorithmic rules (Fig. 2). This approach also makes it possible to generate many environment variations, which helps prevent overfitting. Parameters such as time of day or weather can be varied while the annotations remain accurate. Overall, new maps can be produced in a fraction of the time manual creation takes. The challenge is to generate real-world objects at high quality without manual touch-ups.
Figure 2: procedurally generated high-resolution buildings
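The core idea of procedural generation is that one seed plus a set of rules deterministically yields a whole map variant. Real pipelines are vastly richer, but a toy sketch (all parameters here are invented for illustration) conveys the mechanism:

```python
# Sketch of seed-driven procedural layout: a grid of city blocks with
# randomized building footprints and heights. One seed -> one
# reproducible world; a different seed -> a different variant.
import random

def generate_city(seed, blocks_x=4, blocks_y=4, block_size=80.0):
    rng = random.Random(seed)          # all randomness flows from one seed
    buildings = []
    for bx in range(blocks_x):
        for by in range(blocks_y):
            for _ in range(rng.randint(1, 4)):   # buildings per block
                buildings.append({
                    "x": bx * block_size + rng.uniform(5, block_size - 25),
                    "y": by * block_size + rng.uniform(5, block_size - 25),
                    "w": rng.uniform(10, 20),    # footprint, metres
                    "d": rng.uniform(10, 20),
                    "height": rng.uniform(6, 60),
                })
    return buildings

city_a = generate_city(seed=42)
city_b = generate_city(seed=42)   # identical map
city_c = generate_city(seed=7)    # a different variant
```

Because the map is a pure function of the seed and rules, annotations (building footprints, road topology) are exact by construction, and thousands of variants cost nothing but compute.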
Accurate sensor simulation
When generating synthetic data, the environments described above serve as input to the sensor models. These models should reproduce effects such as lidar depth estimates, the digital beamforming characteristics of radar, and the noise sources of cameras. At the same time, they must be fast enough for software- and hardware-in-the-loop testing and for machine-learning applications that consume large amounts of data.
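To make the camera-noise point concrete, here is a minimal sketch of a physically motivated noise model: Poisson shot noise (photon-counting statistics) plus Gaussian read noise, with saturation at the full-well capacity. The constants are illustrative, not calibrated to any real sensor:

```python
# Sketch of a simple camera noise model applied to an ideal image
# expressed in photo-electrons per pixel.
import numpy as np

def add_sensor_noise(ideal_electrons, read_noise_e=5.0, full_well=20000, seed=0):
    """Apply shot (Poisson) and read (Gaussian) noise, then saturate."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(ideal_electrons).astype(float)         # photon statistics
    noisy = shot + rng.normal(0.0, read_noise_e, shot.shape)  # electronics noise
    return np.clip(noisy, 0.0, full_well)                     # full-well clipping

ideal = np.full((256, 256), 1000.0)   # flat patch, 1000 e- per pixel
noisy = add_sensor_noise(ideal)
# Total noise approaches sqrt(1000 + 5**2) ~ 32 e- on this patch.
```

A production model would add many more terms (dark current, fixed-pattern noise, lens effects, the ISP pipeline), but each is layered on in the same way.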
Even though a sensor model may need to handle hundreds or thousands of different conditions and topologies, all of them ultimately obey the same fundamental principles of energy transfer and information theory. A well-designed sensor simulation framework can therefore remain flexible across environments. The underlying philosophy is to carry the tools of electro-optical and signal-processing system design from the world of sensor engineering into the world of simulation.
However well designed a model is in theory, it is only as valuable as its ability to capture the behavior of its real-world counterpart. How closely model and reality must agree depends on the use case. In simple scenarios a simple lookup table may suffice, while other cases require a quantitative statistical assessment of various properties and characteristics, which usually involves a combination of laboratory and field experiments to measure the specific properties of the sensor. Simulating sensor behavior (and quantifying the accuracy of that simulation) can thus be treated as a science in which a reference is established and then fidelity is relaxed in controlled, measured steps.
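One simple form such a quantitative assessment can take is comparing the empirical distribution of some sensor statistic (range error, pixel intensity, return amplitude) between field data and the simulator. A sketch using a two-sample Kolmogorov-Smirnov statistic follows; the "measurements" are synthetic placeholders, and `ks_statistic` is a hand-rolled illustration rather than a library call:

```python
# Sketch: quantify the real-vs-synthetic gap by comparing empirical
# CDFs of a sensor statistic (here, lidar range error in metres).
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(1)
real_err = rng.normal(0.00, 0.03, 5000)    # stand-in for field measurements
synth_err = rng.normal(0.01, 0.03, 5000)   # stand-in for simulator output
print(ks_statistic(real_err, synth_err))   # small but nonzero gap
```

In practice one would track such distance metrics per sensor, per operating condition, and gate simulator releases on them staying within agreed bounds.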
Figure 3: simulation of a rotating lidar with 128 lasers
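As a toy illustration of the beam geometry such a simulation involves, the sketch below generates the ray directions of a hypothetical 128-laser spinning lidar and intersects them with a flat ground plane (the field-of-view and mounting-height numbers are assumptions, not specs of any real device):

```python
# Sketch: beam geometry of a hypothetical 128-laser rotating lidar.
# Each revolution fires every laser at each azimuth step; here the rays
# are cast against a flat ground plane 2 m below the sensor.
import numpy as np

def lidar_ground_hits(n_lasers=128, azimuth_steps=1024,
                      fov_down=-25.0, fov_up=15.0, sensor_height=2.0):
    elev = np.radians(np.linspace(fov_down, fov_up, n_lasers))
    azim = np.radians(np.linspace(0.0, 360.0, azimuth_steps, endpoint=False))
    el, az = np.meshgrid(elev, azim)            # one ray per (azimuth, laser)
    dirs = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)
    down = dirs[..., 2] < 0                     # only down-tilted beams hit
    t = -sensor_height / dirs[down][:, 2]       # ray/plane intersection param
    return dirs[down] * t[:, None]              # (N, 3) ground points

pts = lidar_ground_hits()
```

A real lidar model layers beam divergence, reflectivity-dependent intensity, dropouts, and timing effects on top of this ray casting; the geometry above is only the skeleton.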
Efficiency and repeatability of synthetic data
Two factors limit the usability of synthetic data: efficiency and repeatability. For a variety of reasons, the biggest challenge in simulating sensors for self-driving systems is the fidelity achievable within real-time processing constraints. Fidelity and performance are also closely tied to how well synthetic sensor generation scales: building a scalable solution increasingly means exploiting resources in parallel.
This coordination of resources leads naturally to the question of repeatability. For parallelization to pay off, a balance must be struck between the parallel and sequential parts of the simulation. Determinism is the key property that lets engineers test changes to their algorithms in isolation while still exploiting the full range of the simulator's capabilities.
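A common way to get determinism in a parallel fleet is to derive each scenario's random seed from a stable ID rather than from execution order. A minimal sketch, with a made-up `run_scenario` standing in for an actual simulation step:

```python
# Sketch: deterministic seeding for parallel scenario runs. Results are
# identical no matter which worker runs a scenario, or in what order.
import hashlib
import random

def scenario_seed(run_seed: int, scenario_id: str) -> int:
    """Stable per-scenario seed derived only from the run seed and ID."""
    digest = hashlib.sha256(f"{run_seed}:{scenario_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def run_scenario(run_seed, scenario_id):
    rng = random.Random(scenario_seed(run_seed, scenario_id))
    # Stand-in for a stochastic simulation: a few "random" events.
    return scenario_id, [round(rng.random(), 6) for _ in range(3)]

scenarios = [f"scene-{i}" for i in range(8)]
forward = dict(run_scenario(123, s) for s in scenarios)
backward = dict(run_scenario(123, s) for s in reversed(scenarios))
assert forward == backward   # execution order does not affect results
```

With this property, an engineer can rerun a single failing scenario in isolation and get bit-identical behavior to the full parallel run.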
Sensor simulation: adapting to use cases
Once methods for building environments and sensors exist, the next question arises: are the resulting synthetic data sufficient for all use cases? Use cases vary with the maturity of the software, from validating sensor placement with synthetic data to testing final production systems before deployment.
Each use case demands a different level of model fidelity, and these fidelity levels drive the verification and validation processes. Verification is the process of checking that the resulting model conforms to its original specification (did we build what we set out to build?). Verification also covers determinism (does the model reproduce the same results every time under the same conditions?). Validation works from the opposite direction: the end user's requirements determine whether the model meets the needs of the target application. In some cases, even a rough approximation of the physics underlying a sensor is acceptable. Production testing, however, requires synthetic sensor models that have been validated both in the laboratory and in the field, to ensure they stay within acceptable levels of uncertainty.
Evaluating sensor models is also harder than simply checking output signal levels. While signal-level checks matter for many sensing technologies in self-driving systems, the end user ultimately wants perception models that work well on both synthetic and real data. These models may be based on classical computer vision or built with various machine-learning and deep-learning techniques, and in such use cases the sources of uncertainty are unknown unless the sensor model itself is fully trusted.
Applied Intuition Approach
Applied Intuition has developed its Perception Modeling Tool from the ground up to address the problems described above. It includes tools for creating large-scale environments, developing sensor models with multiple levels of fidelity, and testing driven by use cases. Procedural environment generation runs through a dedicated pipeline that is flexible with respect to geographic regions, autonomous-driving applications, and data sources.