Multi-sensor calibration of advanced driver assistance systems (ADAS) in autonomous vehicles ensures accurate data fusion by aligning various sensors (like cameras, LiDAR, and radar) into a unified coordinate system, enabling precise perception and localization.
This article reviews the types and techniques of multi-sensor ADAS calibration.
ADAS sensor calibration combines intrinsic considerations for individual sensors with extrinsic, system-level factors. Intrinsic calibration addresses general factors, such as linearity, output slope, and offset, as well as sensor-specific specifications. Examples of sensor-specific factors include the following (a camera-focused calibration sketch follows the list):
- Camera — focal length, lens distortion, resolution, high dynamic range (HDR), focusing speed, high sensitivity, low-light performance, LED flicker mitigation (to suppress artifacts from pulsed LED sources such as traffic lights), and low latency.
- LiDAR — laser beam angle, field of view (FOV), scan rate, ranging accuracy, angular resolution, and internal coordinate system.
- Radar — antenna gain, frequency, pulse characteristics, range, FOV, resolution, speed measurement accuracy, and the ability to detect various types of objects/materials.
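As a concrete illustration of intrinsic calibration, the sketch below estimates a camera's focal length, principal point, and lens-distortion coefficients from checkerboard images using OpenCV. The image paths, board dimensions, and square size are illustrative assumptions, not values from any specific ADAS workflow.

```python
# Minimal sketch of intrinsic camera calibration with OpenCV's
# checkerboard workflow. The image directory and the 9x6 board size
# are hypothetical, not values from the article.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)        # inner corners per row/column (assumed)
SQUARE_SIZE_M = 0.025      # checkerboard square edge in meters (assumed)

# 3D coordinates of the board corners in the board's own frame (z = 0)
obj_template = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
obj_template *= SQUARE_SIZE_M

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        obj_points.append(obj_template)
        img_points.append(corners)

# Solve for the camera matrix (focal length, principal point) and the
# lens-distortion coefficients named in the list above.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```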
Extrinsic calibration examines the spatial relationship between sensors, encompassing both translation and rotation. It aligns the coordinate systems of the camera, LiDAR, and radar so their outputs can be fused. For example, accurate extrinsics make it possible to validate object-level tracking and to fuse detections using a track-level fusion scheme.
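A minimal sketch of what an extrinsic calibration result looks like in practice: a rotation R and a translation t, packed into a 4x4 homogeneous transform, that maps LiDAR points into the camera's coordinate frame. The numeric values below are placeholders, not real calibration data.

```python
# Minimal sketch of applying an extrinsic calibration (rotation R plus
# translation t) to map LiDAR points into the camera frame.
import numpy as np

# Assumed extrinsics, LiDAR frame -> camera frame (placeholder values)
R = np.eye(3)                          # 3x3 rotation
t = np.array([0.1, -0.05, 0.2])        # translation in meters

T_cam_lidar = np.eye(4)                # 4x4 homogeneous transform
T_cam_lidar[:3, :3] = R
T_cam_lidar[:3, 3] = t

def lidar_to_camera(points_lidar: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) LiDAR point cloud into camera coordinates."""
    homogeneous = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_cam_lidar @ homogeneous.T).T[:, :3]

points = np.array([[5.0, 0.0, 0.0], [10.0, 2.0, -0.5]])  # sample returns
print(lidar_to_camera(points))
```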
Extrinsic calibration of ADAS can be implemented using a combination of target-based and targetless methodologies, or using targetless techniques alone.
Target-based calibration
Target-based calibration, also called controlled environment or static calibration, uses targets with specific shapes and sizes at specified distances to calibrate ADAS sensor performance in a static setting.
Static ADAS calibration requires controlled lighting conditions and the absence of reflective surfaces that could confuse the sensors. Calibration targets are used to calibrate and align the sensors (Figure 1).

The highly controlled conditions for target-based ADAS calibration support high-accuracy calibrations. However, the use of a controlled environment is also a limitation, as ADAS is often operated in uncontrolled environments on roadways. As a result, target-based ADAS calibration is generally used in combination with targetless calibration.
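The sketch below illustrates the target-based idea, assuming a target whose corner positions are known in the workshop's frame: the corners detected in a camera image, together with OpenCV's solvePnP, yield the camera's pose relative to the target. All coordinates and intrinsics here are hypothetical.

```python
# Hedged sketch: estimating a camera's pose relative to a fixed
# calibration target with OpenCV's solvePnP. The target corner layout,
# detected pixel coordinates, and intrinsics are illustrative only.
import cv2
import numpy as np

# Known 3D corner positions of the target in the workshop frame (meters)
target_corners_3d = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.0],
], dtype=np.float64)

# Pixel locations of the same corners detected in the camera image
detected_corners_2d = np.array([
    [310.0, 245.0], [402.0, 247.0], [400.0, 338.0], [312.0, 336.0],
], dtype=np.float64)

# Intrinsics from a prior intrinsic calibration (placeholder values)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)   # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(
    target_corners_3d, detected_corners_2d, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the camera pose
print("camera rotation:\n", R, "\ncamera translation:", tvec.ravel())
```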
Targetless calibration
A key challenge for targetless calibration stems from the diverse types of data provided by the three sensor modalities. Cameras produce 2D images, LiDAR produces dense 3D point clouds, and radar provides sparse 4D point clouds in which the fourth dimension is the radial (Doppler) velocity of each detection.
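A minimal sketch of the three data shapes using synthetic arrays; the resolutions and point counts are illustrative only, but they show why fusing these outputs requires cross-modal calibration.

```python
# Illustrative shapes only: one synthetic frame per sensor modality.
import numpy as np

camera_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # 2D RGB image
lidar_cloud = np.zeros((120_000, 3), dtype=np.float32)     # dense (x, y, z)
radar_cloud = np.zeros((64, 4), dtype=np.float32)          # sparse (x, y, z, v)

print(camera_frame.shape, lidar_cloud.shape, radar_cloud.shape)
```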
Targetless calibration can be implemented by connecting a scan tool to the vehicle's onboard computer and then driving at specified speeds, following other vehicles, and navigating clearly marked roads. The scan tool detects objects and road markings and uses an algorithm to calibrate the sensors against the real-world environment.
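A toy illustration, not any vendor's actual algorithm, of the idea behind this kind of targetless calibration: adjust an extrinsic parameter until features seen by one sensor line up with the same features seen by another. Here a single yaw offset is recovered by minimizing the pixel misalignment between projected LiDAR lane points and lane markings detected in the image; the projection model is deliberately simplified.

```python
# Toy targetless-calibration sketch: recover a yaw error by aligning
# projected LiDAR lane points with lane-marking pixels in the image.
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic lane-marking pixels detected in the camera image
u_image = np.linspace(600.0, 700.0, 50)   # pixel columns of the lane

FOCAL_PX = 800.0                          # toy focal length in pixels
true_yaw_err = np.deg2rad(2.0)            # unknown error to be recovered

def project(yaw_offset: float) -> np.ndarray:
    """Toy projection: a residual yaw error shifts the lane horizontally."""
    return u_image + FOCAL_PX * np.tan(true_yaw_err - yaw_offset)

def misalignment(yaw_offset: float) -> float:
    """Mean pixel error between projected LiDAR lane and image lane."""
    return float(np.mean(np.abs(project(yaw_offset) - u_image)))

result = minimize_scalar(misalignment, bounds=(-0.1, 0.1), method="bounded")
print("recovered yaw error (deg):", np.rad2deg(result.x))
```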
A newer targetless calibration approach is based on self-supervised learning (SSL) and deep neural networks. In this approach, one part of the signal is used to predict another part of the signal. It has been used to super-resolve a radar array and to up-sample camera frames or LiDAR measurements, improving calibration results.
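A highly simplified sketch of the SSL idea, not the published method: a small network predicts dense depth from the camera image while a learnable extrinsic offset is optimized so that projected LiDAR depths agree with the network's prediction, letting one part of the signal supervise another. All shapes, the toy intrinsics, and the translation-only extrinsic are simplifying assumptions.

```python
# Simplified SSL-style calibration sketch (PyTorch). A real system
# would use a fully differentiable projection (e.g., bilinear sampling)
# so gradients reach every extrinsic parameter; here only the depth
# component of the learnable offset receives a gradient.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Predicts a dense depth map from an RGB image (toy-sized)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())
    def forward(self, img):
        return self.net(img)

depth_net = TinyDepthNet()
extrinsic_t = nn.Parameter(torch.zeros(3))   # learnable translation offset
optimizer = torch.optim.Adam(
    list(depth_net.parameters()) + [extrinsic_t], lr=1e-3)

def project_lidar(points, t):
    """Toy pinhole projection of (N, 3) LiDAR points shifted by t."""
    p = points + t
    f, cx, cy = 100.0, 32.0, 32.0            # toy intrinsics
    u = (f * p[:, 0] / p[:, 2] + cx).long().clamp(0, 63)
    v = (f * p[:, 1] / p[:, 2] + cy).long().clamp(0, 63)
    return u, v, p[:, 2]

image = torch.rand(1, 3, 64, 64)             # synthetic camera frame
lidar = torch.rand(500, 3) * 10 + torch.tensor([0.0, 0.0, 5.0])

for step in range(100):
    optimizer.zero_grad()
    pred_depth = depth_net(image)[0, 0]      # (64, 64) depth map
    u, v, z = project_lidar(lidar, extrinsic_t)
    # Self-supervised loss: one signal (LiDAR depth) supervises the
    # other (image-predicted depth); no hand-labeled targets needed.
    loss = torch.mean((pred_depth[v, u] - z) ** 2)
    loss.backward()
    optimizer.step()
```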
Figure 2a shows camera-LiDAR calibration based on SSL. In the left-hand image, the projected LiDAR point cloud (colored by range) and the camera image are clearly misaligned; after SSL-based calibration, they are aligned, as shown in the right-hand image.

Figure 2b illustrates camera-radar calibration. Before calibration, the radar detections, represented by the cyan ‘+’ markers, are misaligned with the moving vehicles in the left-hand image. SSL can be used to calibrate the camera-radar pair and align their outputs, as shown in the image on the right.
Summary
ADAS are complex systems with multiple sensor modalities that require different types of intrinsic and extrinsic calibration. Overall ADAS operation also requires multimodal calibration, typically using a combination of target-based and targetless methods. Recently, SSL techniques have been applied to targetless ADAS sensor calibration to deliver improved results.
References
A Multi-sensor Calibration Toolbox for Autonomous Driving, arXiv
An Auto-Calibrating System for Sensors in Autonomous Vehicles, KPIT Technologies
Enhancing lane detection with a lightweight collaborative late fusion model, ScienceDirect
How to Calibrate Sensors with MSA Calibration Anywhere for NVIDIA Isaac Perceptor, NVIDIA Developer
Joint Calibration of a Multimodal Sensor System for Autonomous Vehicles, MDPI Sensors
Multilevel Data and Decision Fusion Using Heterogeneous Sensory Data for Autonomous Vehicles, MDPI Remote Sensing
Physics and semantic informed multi-sensor calibration via optimization theory and self-supervised learning, Scientific Reports
Probability-Based LIDAR–Camera Calibration Considering Target Positions and Parameter Evaluation Using a Data Fusion Map, MDPI Sensors
The Complete Guide to ADAS Calibration, John Bean