This article is the third part in a series of three articles about the sensors used by the automotive industry to enable perception on autonomous vehicles.

RGB cameras, LiDARs and radars are the three main sensors used by the automotive industry to provide perception for autonomous vehicles at various levels of autonomy. Each of these three ADAS technologies has its own advantages and disadvantages, so the ideal system is a combination of all three.


Radar technology can be used to complement the information coming from the other sensors. It transmits high-frequency radio waves to measure the range, direction and velocity of objects. It works in any weather condition and at night. One of its shortcomings is that it cannot resolve small features. Radars are already used today to drive Adaptive Cruise Control (ACC): they reliably measure the distance to the car ahead and adapt the vehicle's speed accordingly.
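The two basic radar measurements mentioned above can be sketched with textbook physics: range from the round-trip time of the echo, and radial velocity from the Doppler shift. The numbers below (a 77 GHz carrier, as used by typical automotive radars, and the example echo delay) are illustrative assumptions, not values from the article.

```python
C = 299_792_458.0  # speed of light in m/s

def radar_range(round_trip_time_s: float) -> float:
    """Range from the echo's round-trip time: the wave travels out and back,
    so the one-way distance is half of c * t."""
    return C * round_trip_time_s / 2.0

def radial_velocity(doppler_shift_hz: float, carrier_freq_hz: float) -> float:
    """Radial (closing) velocity from the Doppler shift: f_d = 2 * v * f0 / c."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

# Illustrative example: an echo arriving 0.5 µs after transmission
# comes from a target roughly 75 m ahead.
r = radar_range(0.5e-6)            # ≈ 74.95 m
# A ~5.13 kHz Doppler shift on a 77 GHz carrier corresponds to a
# closing speed of roughly 10 m/s.
v = radial_velocity(5133.0, 77e9)  # ≈ 10 m/s
```

An ACC system would feed range and closing speed like these into its speed controller to keep a safe following distance.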

We started by saying that each sensor's disadvantages can be compensated by combining (fusing) their data. Sensor fusion is performed by a central computer that integrates all the information it receives into a complete view of the scene (perception). For example, when the LiDAR reports a cluster of points at some distance, the RGB camera's image is used to identify the object from its features and colors. To cope with the heavy data volumes produced by the sensors, some systems today use edge processors located on the sensors themselves. These perform initial processing (typically data compression), so that the central computer can run the fusion and analysis faster and more efficiently.
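The LiDAR-plus-camera example above relies on a standard geometric step: projecting a 3D LiDAR point into the camera image so the corresponding pixels can be classified. Here is a minimal sketch using a pinhole camera model; the intrinsic matrix values are hypothetical, and the point is assumed to already be transformed into the camera's coordinate frame (a real system would first apply the LiDAR-to-camera extrinsic calibration).

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths fx = fy = 1000 px,
# principal point at (640, 360) -- illustrative values only.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_to_image(point_cam: np.ndarray):
    """Project a 3D point (in the camera frame, meters) to pixel coordinates.

    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    z = point_cam[2]
    if z <= 0.0:                 # behind the image plane, not visible
        return None
    uvw = K @ point_cam          # homogeneous image coordinates
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

# A LiDAR return 20 m ahead and 1 m to the left of the camera axis
# lands at these pixel coordinates, where the image can be sampled
# to identify the object by its features and colors.
px = project_to_image(np.array([-1.0, 0.0, 20.0]))  # → (590.0, 360.0)
```

Fusing the two sensors this way pairs the LiDAR's accurate range with the camera's rich appearance information, which is exactly the complementarity the article describes.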

Read the first article (about RGB cameras) and the second article (about LiDARs) in this series about sensors for ADAS. RSIP Vision's engineers are experts in autonomous driving and ADAS technology. Call us and we will help you with your ADAS technology project.