ADAS: the main trends in recognition technologies

Posted in September 2019.






Cruise's test car that rolled off GM's production line is equipped with sensors (marked in red).



The automotive industry is still looking for reliable recognition technology that will work in all conditions - night, fog, rain, snow, ice, etc.



The main takeaway from last week's AutoSens 2019 was that there is no shortage of technological innovation. Technology developers, Tier 1s and OEMs are still pursuing the goal of creating "reliable" recognition technology that works in all road conditions - at night, in fog, in rain, in snow, on ice, on roads slick with spilled oil, and so on.



Although the industry has yet to find a silver bullet that solves every problem at once, a number of companies presented new recognition technologies and product concepts.



This year's AutoSens show in Brussels focused more on advanced driver-assistance systems (ADAS) than on self-driving cars.



The engineering community has reached a certain consensus: many acknowledge that there is a wide gap between what is possible today and the prospect of commercial AI-driven vehicles that require no human driver.



To be clear, nobody is saying that driverless vehicles are impossible. However, Phil Magney, founder and director of VSI Labs, believes that "Level 4 self-driving cars will operate within extremely limited operational design domains (ODDs). The designs of these vehicles will be built on comprehensive and detailed safety requirements."



Magney clarified that by "restricted areas" he means restrictions on road and lane selection, operating times, weather conditions, time of day, exit and stop points, and so on.



Bart Selman, a Cornell University computer science professor specializing in AI, was asked whether an AI-driven car could ever reason with "common sense" - that is, be aware of the driving process and understand context. At the conference's close, Selman answered: "We will get there in at least 10 years ... and maybe in 20-30 years."



Meanwhile, developers of ADAS and highly automated vehicles are racing to build vehicle vision systems.



Phil Koopman, CTO of Edge Case Research and a professor at Carnegie Mellon University, believes that the foundation of any highly automated vehicle is a perception system that can determine the position of the various objects around the vehicle. The weakness of autonomous vehicles, he clarified, lies in prediction - understanding context and anticipating where a detected object might move next.



The push toward smart systems



One trend that emerged at the conference was the proliferation of intelligent sensor systems. Many manufacturers are adding AI to their products by building it into sensor combinations (RGB camera + NIR; RGB + SWIR; RGB + lidar; RGB + radar).



However, there is no consensus among industry players on how to get there. Some believe the path to success runs through such sensor combinations, while others (such as Waymo) lean toward processing all sensor data on a central processor.



AutoSens also featured many new monitoring systems being developed to meet Euro NCAP requirements for driver monitoring, a key safety benchmark for 2020 - in particular, systems that monitor not only the driver but also passengers and other objects inside the car.



One example is On Semiconductor's new RGB-IR sensor, paired with Ambarella's RGB-IR video processing chip and Eyeris scene-recognition software.



NIR vs SWIR



The need to see in the dark (both inside and outside the vehicle) points to infrared (IR).



While On Semiconductor's RGB-IR image sensor works in the near-infrared (NIR) range, Trieye, which also attended the show, went further by introducing a shortwave-infrared (SWIR) camera.






Among the advantages of SWIR cameras is the ability to see objects in any weather or lighting conditions. More importantly, SWIR can proactively identify road hazards (such as ice) by detecting the unique spectral response each material produces, determined by its chemical and physical characteristics.
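To make the idea concrete, here is a minimal sketch of spectral-response classification: each pixel's SWIR spectrum is matched against reference material signatures using the spectral-angle measure, a standard hyperspectral technique. The wavelength bands and reflectance values below are invented for illustration; they are not Trieye's data or method.

```python
import numpy as np

# Hypothetical reference reflectance signatures sampled at a few SWIR
# wavelengths (nm). The numbers are illustrative, not measured data.
BANDS_NM = [1100, 1300, 1550, 1700]
REFERENCE_SIGNATURES = {
    "dry_asphalt": np.array([0.30, 0.28, 0.27, 0.26]),
    "water":       np.array([0.12, 0.06, 0.02, 0.01]),
    "ice":         np.array([0.25, 0.10, 0.04, 0.03]),
    "oil_film":    np.array([0.20, 0.18, 0.15, 0.14]),
}

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two spectra; insensitive to overall brightness."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_pixel(spectrum: np.ndarray) -> str:
    """Return the reference material whose spectral shape is closest."""
    return min(REFERENCE_SIGNATURES,
               key=lambda m: spectral_angle(spectrum, REFERENCE_SIGNATURES[m]))

# A pixel whose reflectance collapses in the strongly water-absorbing bands
# around 1550-1700 nm matches the "ice" signature, not "dry_asphalt".
print(classify_pixel(np.array([0.26, 0.10, 0.04, 0.03])))  # -> ice
```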



Until now, however, the use of SWIR cameras has been limited to military, scientific and aerospace applications due to the extremely high cost of the indium gallium arsenide (InGaAs) used in the technology. Trieye claims to have found a way to build SWIR cameras using CMOS technology. "We have made a breakthrough. As the semiconductor industry has done since its early days, we use CMOS for high-volume production of SWIR cameras," says Avi Bakal, CEO and co-founder of Trieye. Unlike an InGaAs sensor costing around $8,000, Bakal says, the Trieye camera will be offered "for tens of dollars."






Lack of labeled data



One of the biggest challenges in AI is the lack of data for training. More precisely, the lack of "labeled training data," Magney said. "A model is only as good as the data and the way it is collected. And of course, training data must be tagged with metadata - a labeling process that takes a very long time."



Generative adversarial networks (GANs) were a lively discussion topic at AutoSens. In a GAN, two neural networks compete to create new data, Magney said. Given a training sample as input, such models learn to generate new data whose statistics match those of the original.
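As an illustration of the mechanism Magney described, below is a minimal GAN training loop in PyTorch. It is a generic sketch - the network sizes, learning rates and data dimensions are arbitrary assumptions, and real image-generation GANs are far larger - but the adversarial structure is the same.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

# The generator maps random noise to synthetic samples; the discriminator
# scores how "real" a sample looks (as a logit).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    b = real_batch.size(0)
    # 1) Train the discriminator: label real samples 1, generated samples 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train the generator to make the discriminator label its output 1.
    fake = G(torch.randn(b, latent_dim))
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage: call train_step() over batches of real samples; once the two
# networks reach equilibrium, G(torch.randn(n, latent_dim)) produces new
# samples whose statistics approximate the training data.
```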



Drive.ai, for example, uses deep learning to automate data labeling, speeding up the tedious annotation process.



In his AutoSens lecture, Koopman also touched on the problem of accurately annotating data. He suspects that much data remains unlabeled because only large companies can afford to do it right.



Indeed, the AI startups at the show acknowledged that paying third parties to annotate their data hurts them considerably.



GANs are one way to attack this problem. Edge Case Research offers another way to accelerate the development of safe perception software without labeling data. The company recently announced Hologram, a tool for stress-testing perception systems and analyzing risk. According to Koopman, instead of labeling petabytes of data you simply run the data through twice: Hologram flags suspicious parts of the dataset and suggests what to do next - expand the training set or retrain the model.



The conference also took up a related question about labeled datasets: what happens if a car OEM replaces the camera and sensors that were used to collect and train on the data?



David Tokich, VP of marketing and strategic partnerships at Algolux, told EE Times that engineers working on ADAS and autonomous vehicles worry about two things: 1) the robustness of recognition systems under varied conditions, and 2) building accurate, scalable solutions for computer vision tasks.



The camera systems used in ADAS and autonomous vehicles can differ significantly from one another. Their parameters vary with the lens (different lenses give different fields of view), the sensor and the signal-processing pipeline. A technology company picks one camera system, collects a large dataset, labels it, and trains a model tuned to that specific system.



But what happens when an OEM swaps out the camera a specific dataset was built with? The change can hurt perception accuracy, because a machine-learning model tuned to one camera must now deal with raw data from another.



Does this mean the OEM must retrain its models over and over on new datasets?






Tesla, Waymo and GM/Cruise use a variety of cameras in their self-driving vehicles.



When asked about swapping image sensors, Magney of VSI Labs said: "I don't think this will work - unless the specifications remain the same." He added: "At VSI, we trained a neural network to work with a FLIR thermal camera, and the characteristics of the images in the training set matched the camera the network was trained for. Later we replaced the sensors, but the technical specifications remained the same."



Algolux, however, claims its new technology can migrate previously created datasets "within a few days." According to Tokich, the Atlas Camera Optimization Suite solves the problem by taking "baseline data" (camera and sensor characteristics) and applying it to the perception layers. "Our goal is to democratize camera choice" for OEMs, Tokich said.



AI hardware



Over the past few years, many startups have sprung up in the AI-processor field, creating momentum that has prompted some to proclaim a revival of the hardware market. Many AI chip startups name autonomous vehicles and ADAS as their target markets.



Ceva, in particular, unveiled at AutoSens a new AI core and its 'Invite API' - products aimed at the accelerator market for intelligent systems.



Curiously, modern AI chips have yet to make it into the new generation of cars - with the exception of chips from Nvidia and Intel/Mobileye, and the full self-driving chip Tesla developed for in-house use.



On Semiconductor, on the other hand, announced at AutoSens that its team (together with the Eyeris team) will use Ambarella's system-on-chip as the AI processor for in-vehicle monitoring.



Modar Alaoui, CEO of Eyeris, said: "We couldn't find a single AI chip that could run 10 neural networks, consume less than 5 watts, and capture 30-frames-per-second video from up to six cameras inside the car."



Alaoui admitted that Ambarella is not well known as an AI chip maker (the company is better known for video-compression and computer-vision chips). Still, he said, Ambarella's CV2AQ SoC met all the requirements, outperforming every other accelerator.



Alaoui hopes his company's AI software will be ported to three more hardware platforms in time for the Consumer Electronics Show in Las Vegas in January 2020.






On Semi, Ambarella and Eyeris demonstrated a new in-cabin monitoring system using three RGB-IR cameras.



At the same time, On Semi stressed that driver and passenger monitoring systems require "the ability to capture images under a variety of lighting conditions, from direct sunlight to darkness." The company says that thanks to its good near-infrared sensitivity, "the RGB-IR CMOS image sensor delivers Full HD 1080p output using 3-exposure HDR and backside illumination (BSI) at 3.0 µm." Sensors sensitive to both RGB and IR light can capture color images in daylight and monochrome images under near-IR illumination.
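For intuition about what "3-exposure HDR" does, here is a generic multi-exposure merge in Python: three frames shot at different exposure times are combined with a hat-shaped weight that favors well-exposed pixels. This is a textbook-style sketch of the general technique, not On Semi's actual on-sensor pipeline.

```python
import numpy as np

def merge_hdr(frames, times):
    """Merge differently exposed frames (pixel values in [0, 1]) into a
    single radiance estimate. Generic weighted merge, not a vendor pipeline."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, times):
        # Hat weight: trust mid-tones, down-weight under/over-exposed pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)   # dividing by exposure time estimates radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# Three synthetic exposures (short, medium, long) of the same scene:
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, (4, 4))                  # "true" radiance
times = [0.25, 1.0, 4.0]                               # relative exposure times
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]
radiance = merge_hdr(frames, times)  # ~ scene wherever any frame is unclipped
```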



Going beyond driver monitoring systems



Alaoui is proud that Eyeris's AI software can perform comprehensive body and face analysis, monitor passenger activity and detect objects. Beyond watching the driver, "we monitor everything inside the car, including the surfaces of the seats and the steering wheel," he added, stressing that the startup is already doing much more than "searching for cars in the video stream."



Laurent Emmerich, director of European customer solutions at Seeing Machines, is likewise not stopping there. "Going beyond driver monitoring to tracking many objects is a natural evolution," he said. "We, too, are looking to expand."



Compared with the startups, Seeing Machines' advantage lies in "a solid foundation in computer vision and AI experience gained over the past 20 years," he added. The company's driver monitoring system is currently used by six car manufacturers across nine programs.



In addition, Seeing Machines noted that it has developed its own driver-monitoring chip, Fovio. Asked whether the chip could also power future in-cabin monitoring systems, Emmerich explained that it will be used in a configurable hardware platform.



Redundancy



Combining different sensors and installing them in a car is necessary not only to improve perception, but also to add much-needed redundancy for safety.






The Outsight box on display at AutoSens.



Outsight, a startup co-founded by former Withings CEO Cedric Hutchings, showcased a new highly integrated multi-sensor system at AutoSens. Hutchings explained that Outsight's sensor unit was designed to "provide meaningful recognition and localization of objects with an understanding of the environment's context - including snow, ice and oil on the road." He added: "We can even classify materials on the road using active hyperspectral sensing."



Asked whose sensors go into the Outsight Box, he declined to comment: "We are not announcing our key partners at this time, as we are still working on the specifications and applications."



EE Times learned from Trieye, however, that Outsight will be using a Trieye SWIR camera. Outsight is promoting its sensor unit, due for test release in Q1 2020. The Outsight Box is intended as a complementary, self-contained system providing data "uncorrelated with other sensors" for safety and "true redundancy," Hutchings explained.



The Outsight Box does not use machine learning, so its output is predictable and the system itself can be "certified."



For the autonomous-vehicle and ADAS markets, AEye has developed iDAR, a solid-state MEMS lidar coupled with a high-definition camera. By combining the two sensors with embedded AI, the real-time system can "solve a number of edge cases," says Aravind Ratnam, vice president of product management at AEye.



The iDAR system is designed to fuse 2D camera pixels (RGB) with 3D lidar voxels (XYZ) to generate a new type of data in real time. The company says this new data type delivers greater accuracy and range, and is more useful to the planning systems of autonomous vehicles.
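As a sketch of what pairing pixels with voxels involves, the snippet below projects lidar points into a camera image and attaches RGB values to each 3D point - a generic camera-lidar fusion step, not AEye's proprietary iDAR data type. The intrinsic matrix K and extrinsic transform are assumed to come from calibration.

```python
import numpy as np

def fuse_lidar_rgb(points_xyz, image, K, T_cam_from_lidar):
    """Pair each lidar point (x, y, z) with the RGB pixel it projects onto,
    yielding fused (x, y, z, r, g, b) samples.

    K: 3x3 camera intrinsic matrix; T_cam_from_lidar: 4x4 extrinsic transform
    from the lidar frame to the camera frame (both from calibration)."""
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous coordinates
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]        # lidar -> camera frame
    in_front = cam[:, 2] > 1e-6                        # drop points behind the camera
    safe_z = np.where(in_front, cam[:, 2], 1.0)        # avoid division by zero
    proj = (K @ cam.T).T                               # pinhole projection
    u = (proj[:, 0] / safe_z).astype(int)
    v = (proj[:, 1] / safe_z).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]]                    # sample pixel colors
    return np.hstack([points_xyz[valid], rgb])         # fused (x, y, z, r, g, b)
```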



image



AEye AE110 product features versus industry benchmarks and capabilities.



In his presentation, Ratnam said AEye is exploring a variety of applications. "We looked at 300 scenarios, picked the 56 that fit, and narrowed those down to 20" in which fusing camera, lidar and AI makes sense.



Ratnam showed a scene in which a small child suddenly chases a ball into the street, right in front of the car. The camera-lidar system reacts much faster, cutting the vehicle's response time. "Our iDAR platform can deliver very high-speed computation," Ratnam commented.



Asked about the benefits of combining the sensors, one Waymo engineer at the conference told EE Times he was not sure it would make much difference: "The difference will be microseconds? I'm not sure."



AEye is confident in the benefits its iDAR can bring. Ratnam pointed to close collaboration with Hella and LG and stressed: "We've been able to bring down the cost of iDAR. We are now offering 3D lidar at an ADAS price."



Within the next three to six months, AEye will finish an automotive-grade system combining RGB, lidar and AI algorithms. Ratnam claims the solution will cost less than $1,000.






Sales of lidar systems for the automotive industry (Source: IHS Markit)



Dexin Chen, senior analyst for automotive semiconductors and sensors at IHS Markit, told attendees that lidar vendors have been "ahead of the market and promised too much." He noted that lidar's physical characteristics - its advantage - may shape the market in the future, but commercialization will decide everything. The market desperately needs "standardization, alliances and partnerships, and supply-chain management and AI partnerships."


