Current perception systems often carry multimodal imagers and sensors such as 2D cameras and 3D LiDAR sensors. To fuse and utilize these data for downstream perception tasks, robust and accurate calibration of the multimodal sensor data is essential. We propose a novel deep learning-driven technique (CalibDNN) for accurate calibration among multimodal sensors, specifically LiDAR-camera pairs. The key innovation of the proposed work is that it requires no specific calibration targets or hardware assistance, and the entire processing is fully automatic with a single model and a single iteration. Comparisons with other methods and extensive experiments on different datasets demonstrate state-of-the-art performance.
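The quantity such a method estimates is the rigid-body extrinsic transform (rotation and translation) between the LiDAR and camera frames. As an illustration of how an estimated extrinsic calibration is consumed downstream, here is a minimal sketch of projecting LiDAR points into the image plane; the intrinsics and point values are hypothetical, not taken from the paper:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project Nx3 LiDAR points to pixel coordinates using
    extrinsics (R, t) and a 3x3 camera intrinsic matrix K."""
    cam = points @ R.T + t          # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]        # keep points in front of the camera
    uv = cam @ K.T                  # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide -> (u, v)

# Hypothetical pinhole intrinsics and identity extrinsics for illustration
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0],    # point on the optical axis
                [1.0, 0.0, 2.0]])   # point 1 m to the right, 2 m ahead
uv = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K)
print(uv)  # the on-axis point lands at the principal point (320, 240)
```

An inaccurate extrinsic estimate shifts every projected point, which is why calibration accuracy directly affects fusion quality for downstream perception.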
Ganning Zhao, Jiesi Hu, Suya You, and C.-C. Jay Kuo
"CalibDNN: multimodal sensor calibration for perception using deep neural networks", Proc. SPIE 11756, Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, 117561D (12 April 2021); https://doi.org/10.1117/12.2587994