Presentation + Paper
12 April 2021 CalibDNN: multimodal sensor calibration for perception using deep neural networks
Abstract
Current perception systems often carry multimodal imagers and sensors such as 2D cameras and 3D LiDAR sensors. To fuse and utilize the data for downstream perception tasks, robust and accurate calibration of the multimodal sensor data is essential. We propose a novel deep learning-driven technique (CalibDNN) for accurate calibration among multimodal sensors, specifically LiDAR-camera pairs. The key innovation of the proposed work is that it requires no specific calibration targets or hardware assistance, and the entire processing is fully automatic, using a single model and a single iteration. Comparisons with other methods and extensive experiments on different datasets demonstrate state-of-the-art performance.
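To illustrate why an accurate LiDAR-camera extrinsic calibration matters for fusion, the sketch below projects LiDAR points into the image plane given an estimated extrinsic transform and camera intrinsics. This is a generic illustration of the downstream use of such a calibration, not code from the paper; the transform and intrinsic values are hypothetical placeholders.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) homogeneous extrinsic transform (LiDAR -> camera),
                   e.g. the output of a calibration method such as CalibDNN
    K            : (3, 3) camera intrinsic matrix
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous coords
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]           # points in the camera frame
    in_front = pts_cam[:, 2] > 0                         # keep points ahead of the camera
    pix = (K @ pts_cam[in_front].T).T
    return pix[:, :2] / pix[:, 2:3]                      # perspective divide -> pixels

# Placeholder calibration: identity extrinsics, simple pinhole intrinsics.
T = np.eye(4)
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0]])  # one point 10 m straight ahead
print(project_lidar_to_image(pts, T, K))  # lands on the principal point (320, 240)
```

With an accurate extrinsic transform, the projected points align with the corresponding image structures, which is the property calibration methods optimize for.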
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ganning Zhao, Jiesi Hu, Suya You, and C.-C. Jay Kuo "CalibDNN: multimodal sensor calibration for perception using deep neural networks", Proc. SPIE 11756, Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, 117561D (12 April 2021); https://doi.org/10.1117/12.2587994
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Sensors
Neural networks
Image registration
Calibration
LiDAR
Point clouds
Data modeling
