Revamp Driver Assistance with Cutting-Edge SoC Integration

Aaron Behman of Xilinx and consultant Adam Taylor discuss key considerations for advanced driver assistance systems (ADAS) and the benefits of all-programmable systems-on-chip (SoCs).

Improved processing power, together with the rise of CMOS image sensors and other sensor technologies, has enabled car makers to introduce advanced driver assistance systems (ADAS). ADAS helps drivers stay aware of their surroundings and reduces the risk of accidents. Some systems also monitor the driver, for example warning them if they appear drowsy.

Increasingly, ADAS not only supports drivers with tasks such as parking assist, lane assist, and adaptive cruise control but also feeds information to autonomous driving systems.

Unsurprisingly, the ADAS market is projected to reach $4 billion a year by 2021, growing at roughly 10% annually.

ADAS uses a wide range of sensors, including embedded vision, radar, and lidar. To obtain the required information, these systems often combine data from multiple sensors, a technique called sensor fusion. Within embedded vision, ADAS can be divided into two categories: external monitoring and internal monitoring. External systems handle functions such as lane departure warning, object detection, blind spot detection, and traffic sign recognition, while internal systems monitor driver drowsiness and eye movement. Both categories present challenges, especially in developing the necessary image processing algorithms and meeting automotive standards.
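
To make the external-monitoring category concrete, here is a minimal sketch of a classic lane-marking detector built from OpenCV's Canny edge detector and probabilistic Hough transform. The input file name, region of interest, and all threshold values are illustrative assumptions, not values from the article; a production ADAS pipeline would implement these stages in the SoC's hardware.

```python
# Minimal lane-marking detection sketch (illustrative only).
# Assumes OpenCV (cv2) and NumPy; 'road.jpg' and all thresholds
# below are placeholder choices.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")  # one frame from a forward-facing camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Edge detection, then keep only the lower half of the image,
# where lane markings normally appear.
edges = cv2.Canny(blurred, 50, 150)
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
roi = cv2.bitwise_and(edges, mask)

# The probabilistic Hough transform finds straight line segments
# that are candidates for lane markings.
lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("lanes.jpg", frame)
```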

Many ADAS applications require sensor fusion, which significantly increases the required processing power. Sensor fusion can use either similar types of sensors (homogeneous) or different types (heterogeneous).
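
As a toy illustration of heterogeneous fusion, the sketch below combines a radar range estimate with a noisier camera-derived range estimate by inverse-variance weighting, the static one-dimensional core of a Kalman-style update. The sensor values and variances are invented for the example.

```python
# Toy heterogeneous sensor fusion: combine a radar range estimate and a
# camera-derived range estimate by inverse-variance weighting.
# All numbers are assumed example values.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the minimum-variance linear combination of two estimates."""
    w_a = var_b / (var_a + var_b)  # weight grows as the *other* sensor gets noisier
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

radar_range, radar_var = 25.3, 0.04    # radar: good range accuracy
vision_range, vision_var = 24.1, 1.00  # vision: noisier range estimate

distance, variance = fuse(radar_range, radar_var, vision_range, vision_var)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

Note that the fused variance is always smaller than either input variance, which is the statistical payoff of fusing the two sensors.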

Many of these applications use all-programmable SoCs or FPGAs because of their flexibility both in implementing algorithms and in interfacing with a variety of sensor types and networks.

Performance aside, ADAS applications face several other hurdles. Car manufacturers must meet strict emissions standards, which makes the weight and power consumption of these systems very important. Cost is also critical, since these systems are produced in large quantities. Safety and security are crucial too and are governed by strict standards; using SoCs or FPGAs can help meet some of these requirements.

System Architecture

Developing an embedded vision system that monitors both external and internal cameras is one of the more complex ADAS implementations. Such a system must connect to several cameras around the vehicle, process the images, and present the resulting information to the car’s occupants.
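
As a rough software-level sketch of that architecture, the code below captures frames from several cameras concurrently and applies a placeholder processing step to each. The camera indices and the processing and reporting steps are assumptions for illustration; a real ADAS design would implement these stages in the SoC's programmable logic and hardened video pipelines.

```python
# Skeleton of a multi-camera vision pipeline (illustrative only).
import threading
import time
import cv2

CAMERA_IDS = [0, 1]  # e.g. one external and one driver-facing camera (assumed)

def run_camera(cam_id: int, stop: threading.Event) -> None:
    cap = cv2.VideoCapture(cam_id)
    while not stop.is_set() and cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Placeholder processing: in a real system this is where lane
        # detection, object detection, or drowsiness analysis would run.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(f"camera {cam_id}: mean luminance {gray.mean():.1f}")
    cap.release()

stop = threading.Event()
threads = [threading.Thread(target=run_camera, args=(cid, stop)) for cid in CAMERA_IDS]
for t in threads:
    t.start()
try:
    time.sleep(5.0)  # run for a few seconds in this sketch
finally:
    stop.set()
    for t in threads:
        t.join()
```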

Many camera systems use point-to-point LVDS wiring to transfer data, but this increases cost and adds weight because of the extra cabling required. Other approaches are gaining popularity, such as moving some functionality into the camera itself. If the camera outputs a compressed image instead of raw data, network-based architectures become possible. These networks can use common automotive buses such as:

– MOST (Media Oriented Systems Transport): A high-speed network that can be either optical or electrical.
– IDB-1394: A high-speed network implemented over an electrical physical layer in a daisy-chained setup.
– Ethernet AVB (Audio Video Bridging): Allows image data and other data to be routed around the vehicle as needed.

When using a network, the system architect must ensure there’s enough bandwidth available to handle the data efficiently.
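
As a back-of-the-envelope illustration of that bandwidth check, the sketch below estimates the aggregate network load for a few compressed camera streams. Every parameter here (camera count, resolution, frame rate, pixel depth, compression ratio) is an assumed example value, not a figure from the article.

```python
# Back-of-the-envelope bandwidth check for a camera network (illustrative).
# All parameters below are assumed example values.

cameras = 4                  # number of cameras sharing the network
width, height = 1280, 720    # resolution per camera
fps = 30                     # frames per second
bits_per_pixel = 16          # e.g. YUV 4:2:2 before compression
compression_ratio = 10       # assumed average compression in the camera

raw_bps = width * height * bits_per_pixel * fps  # per camera, uncompressed
compressed_bps = raw_bps / compression_ratio
total_mbps = cameras * compressed_bps / 1e6

print(f"per-camera raw:        {raw_bps / 1e6:.0f} Mbit/s")
print(f"per-camera compressed: {compressed_bps / 1e6:.1f} Mbit/s")
print(f"network total:         {total_mbps:.1f} Mbit/s")
```

With these assumed numbers, each raw stream would need roughly 442 Mbit/s, while the four compressed streams together fit in under 200 Mbit/s, which shows why compressing in the camera is what makes a shared network architecture practical.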
