The combination of multi-layer Light Detection And Ranging (LiDAR) and a camera is commonly used in autonomous perception systems, and the complementary information from these sensors is instrumental for reliable perception of the surroundings. However, obtaining the extrinsic parameters between the LiDAR and the camera, which some perception algorithms must know, is difficult. In this study, we present a method that uses only three 3D-2D correspondences to compute the extrinsic parameters between a Velodyne VLP-16 LiDAR and a monocular camera. The procedure extracts 3D and 2D features from the point cloud and image of a custom calibration target, respectively, and then estimates the extrinsic parameters from these features with the perspective-three-point (P3P) algorithm. Outliers with minimum energy at the geometric discontinuities of the target serve as control points for extracting key features from the LiDAR point cloud. Moreover, a novel method is presented to distinguish the correct solution from the multiple P3P solutions; it relies on conic-shape discrepancies in the spaces of the different solutions.
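For orientation, the sketch below shows how three 3D-2D correspondences feed a P3P solver and yield multiple candidate poses. It is not the authors' implementation: it uses OpenCV's generic cv2.solveP3P rather than the paper's feature extraction and conic-based disambiguation, and all point values and camera intrinsics are hypothetical placeholders.

```python
# Minimal sketch (assumptions: OpenCV P3P solver, hypothetical point values
# and intrinsics) of recovering LiDAR-to-camera extrinsics from three
# 3D-2D correspondences.
import cv2
import numpy as np

# Three 3D control points in the LiDAR frame (meters) -- hypothetical values.
lidar_pts = np.array([[2.0,  0.5, 0.1],
                      [2.1, -0.4, 0.1],
                      [2.0,  0.0, 0.6]], dtype=np.float64)

# Their 2D projections in the image (pixels) -- hypothetical values.
img_pts = np.array([[310.0, 242.0],
                    [410.0, 240.0],
                    [362.0, 180.0]], dtype=np.float64)

# Hypothetical pinhole intrinsics; lens distortion assumed already removed.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# P3P returns up to four candidate (rotation, translation) solutions;
# the paper's contribution is a geometric test (conic-shape discrepancies)
# to select the correct one among them.
n, rvecs, tvecs = cv2.solveP3P(lidar_pts, img_pts, K, dist,
                               flags=cv2.SOLVEPNP_P3P)
for rvec, tvec in zip(rvecs, tvecs):
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation, LiDAR frame -> camera frame
    print("candidate R =\n", R, "\nt =", tvec.ravel())
```

Each candidate (R, t) maps LiDAR points into the camera frame; the disambiguation step described in the abstract would then be applied to these candidates.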