In this undergraduate research project, we combine LiDAR mapping for object detection with AI and computer-vision algorithms to enable a robot to drive a vehicle safely in a given environment. The AI and computer-vision components allow the robot to identify lanes and intersections for navigation, while LiDAR mapping quickly and accurately measures the distance between the vehicle and any object entering a designated zone. This capability lets the robot temporarily stop the vehicle, preventing collisions. Through these technologies, our goal is to prevent collisions during driving, ensure pedestrian safety, and enable safe robot-driven vehicle operation in crowded places. Simulations and tests have been conducted to verify the proposed methods.
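The temporary-stop behavior described above can be sketched as a simple distance check over LiDAR range readings. This is a minimal illustration, not the project's actual implementation; the function name, the flat array of ranges, and the 1.5 m threshold are all assumptions for the example.

```python
import numpy as np

def should_stop(ranges, stop_distance=1.5):
    """Return True if any LiDAR return falls inside the stop zone.

    ranges: 1-D array of range readings in metres (hypothetical format;
    a real scan would also carry angles and intensities).
    stop_distance: assumed safety threshold in metres.
    """
    ranges = np.asarray(ranges, dtype=float)
    # Drop invalid returns (dropouts reported as inf/NaN or non-positive).
    valid = ranges[np.isfinite(ranges) & (ranges > 0.0)]
    return bool(valid.size and valid.min() < stop_distance)

# Usage: an object 0.9 m ahead triggers a temporary stop.
print(should_stop([4.2, 3.8, 0.9, 5.0]))  # True
```

A real system would gate this check on the object lying within the vehicle's projected path rather than anywhere in the scan, but the threshold test is the core of the collision-prevention stop.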
Accurate object detection and depth estimation are critical for a variety of applications such as autonomous driving and robotics. For obstacle avoidance, a LiDAR sensor can determine the position of nearby objects, but its limited resolution prevents it from accurately categorizing and labeling the objects it detects. In contrast, RGB cameras provide rich semantic information that can be used to categorize and segment an object, but they cannot provide accurate depth data. To overcome this, many algorithms have been developed that fuse these two sensors, among others, enabling accurate depth estimation and segmentation of a given object. The problem with many of these systems is that they are complex and produce 3D bounding boxes, which can lead an agent to take a suboptimal path because of the size of the perceived object. The approach proposed in this paper simply locates an object in an RGB image using a CNN, then projects the center pixel of the resulting 2D bounding box into the point cloud to identify and segment the corresponding point cluster.
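The pixel-to-point-cloud step can be sketched with a standard pinhole projection: LiDAR points expressed in the camera frame are projected into the image, and points landing near the bounding-box center pixel are taken as the object's cluster. This is a generic illustration under assumed conventions (camera-frame points, intrinsic matrix `K`, a pixel-radius gate), not the paper's exact pipeline; all names are hypothetical.

```python
import numpy as np

def depth_at_bbox_center(points_cam, K, bbox, radius_px=5.0):
    """Estimate object depth from LiDAR points near the bbox center.

    points_cam: (N, 3) points already transformed into the camera frame,
                in metres (extrinsic calibration is assumed done).
    K:          3x3 camera intrinsic matrix.
    bbox:       (x_min, y_min, x_max, y_max) in pixels, from the CNN detector.
    radius_px:  assumed pixel radius for selecting the point cluster.
    """
    x_min, y_min, x_max, y_max = bbox
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0

    pts = np.asarray(points_cam, dtype=float)
    pts = pts[pts[:, 2] > 0.0]          # keep points in front of the camera
    uvw = (K @ pts.T).T                 # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]       # normalize to pixel coordinates

    near = np.hypot(uv[:, 0] - cx, uv[:, 1] - cy) < radius_px
    if not near.any():
        return None                     # no LiDAR return behind this pixel
    return float(np.median(pts[near, 2]))  # robust depth of the cluster

# Usage: a point directly on the optical axis at 4 m projects to the
# principal point (320, 240) and is picked up by a bbox centered there.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = [[0.0, 0.0, 4.0], [0.01, 0.0, 4.0], [2.0, 2.0, 4.0]]
print(depth_at_bbox_center(pts, K, (310, 230, 330, 250)))  # 4.0
```

Because only the points behind the bounding-box center are used, the agent perceives the object at its measured depth without inflating it into a full 3D box, which is the path-planning advantage the abstract claims.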