KEYWORDS: LIDAR, Sensors, Cameras, Environmental sensing, Robotics, Monte Carlo methods, Mobile robots, 3D metrology, Tunable filters, Stereoscopic cameras
This paper presents the implementation of localization algorithms for indoor autonomous mobile robots in known environments. The proposed implementation employs two sensors, an RGB-D camera and a 2D LiDAR, to perceive the environment and build an occupancy grid map that allows the robot to perform autonomous or remote navigation throughout the environment while localizing itself. The implementation fuses the data retrieved from the perception sensors with odometry to estimate the robot's position through the Monte Carlo Localization algorithm. The proposed implementation employs the Robot Operating System (ROS) framework on an NVIDIA Jetson TX2 mounted on a Turtlebot 2. Experimental results were obtained with a physical implementation of the mobile robot in an indoor environment.
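As a rough illustration of the Monte Carlo Localization loop described above (a minimal sketch, not the paper's implementation: the motion-noise values, the Gaussian range-likelihood model, and the expected_range_fn hook are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)

def predict(particles, delta, noise=(0.02, 0.02, 0.01)):
    # Propagate each (x, y, theta) particle by the odometry increment
    # plus Gaussian motion noise (noise magnitudes are illustrative).
    particles[:, 0] += delta[0] + rng.normal(0, noise[0], len(particles))
    particles[:, 1] += delta[1] + rng.normal(0, noise[1], len(particles))
    particles[:, 2] += delta[2] + rng.normal(0, noise[2], len(particles))
    return particles

def update(particles, measured_range, expected_range_fn, sigma=0.1):
    # Weight each particle by the likelihood of the observed range,
    # given the range the known map predicts from that particle's pose.
    expected = np.array([expected_range_fn(p) for p in particles])
    weights = np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    weights += 1e-300  # guard against an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    # Draw a new particle set with probability proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy()

One filter iteration is predict with the latest odometry increment, update with the latest range measurement, then resample; the pose estimate is the mean (or weighted mean) of the resulting particle set.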
This paper presents the implementation of a driving assistance algorithm based on semantic segmentation. The proposed implementation uses a convolutional neural network architecture known as U-Net to perform image segmentation of traffic scenes captured by the self-driving car during navigation; the segmentation assigns a specific class to every pixel. The driving assistance algorithm uses the semantic segmentation output to evaluate the environment and provides the results to the self-driving car to support its decision making. The evaluation is based on the frequency of the pixels of each class and on an equation that calculates an importance weight for each pixel from its position in the image and its class. Experimental results are presented to evaluate the feasibility of the proposed implementation.
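The abstract does not give the weighting equation itself, so the sketch below is hypothetical: the class IDs, the CLASS_WEIGHTS table, and the bottom-center position term are assumptions standing in for the paper's actual formula. It only shows the shape of the evaluation described: per-class pixel frequencies plus a position- and class-dependent importance weight per pixel.

import numpy as np

# Assumed class weights, e.g. 0 = road, 1 = vehicle, 2 = pedestrian.
CLASS_WEIGHTS = {0: 0.0, 1: 1.0, 2: 3.0}

def evaluate(seg):
    # seg: (H, W) array of per-pixel class IDs from the U-Net output.
    h, w = seg.shape
    freq = {c: float(np.mean(seg == c)) for c in CLASS_WEIGHTS}

    # Assumed position term: pixels low and centered in the frame
    # (closest to the vehicle's path) contribute more.
    ys, xs = np.mgrid[0:h, 0:w]
    pos = (ys / h) * (1.0 - np.abs(xs - w / 2) / (w / 2))

    # Map each pixel's class ID to its weight via a lookup table.
    lut = np.array([CLASS_WEIGHTS.get(c, 0.0) for c in range(int(seg.max()) + 1)])
    risk = float(np.sum(pos * lut[seg])) / (h * w)
    return freq, risk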
KEYWORDS: Sensors, Mobile robots, Cameras, Environmental sensing, Computer simulations, Monte Carlo methods, Navigation systems, 3D modeling, Robotics, Mathematical modeling
This paper presents the implementation of a simultaneous localization and mapping (SLAM) algorithm for autonomous mobile robot navigation. The proposed implementation uses an RGB-D camera to perceive the environment and build an occupancy grid map that allows the mobile robot to navigate autonomously through the environment. The implementation employs the Robot Operating System (ROS) and Adaptive Monte Carlo Localization (AMCL) to estimate the mobile robot's current position in the environment from the RGB-D camera data and the odometry data. The mobile robot performs autonomous navigation while checking that it can travel safely and avoiding obstacles. Experimental results are presented to validate the implementation.
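For reference, the standard ROS amcl node publishes its pose estimate on the /amcl_pose topic as PoseWithCovarianceStamped messages; a minimal rospy node that consumes that estimate might look like the following (an assumed setup for illustration, not the paper's code):

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_pose(msg):
    # amcl publishes its estimate on /amcl_pose; log the position part.
    p = msg.pose.pose.position
    rospy.loginfo("AMCL estimate: x=%.2f y=%.2f", p.x, p.y)

if __name__ == "__main__":
    rospy.init_node("amcl_pose_logger")
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_pose)
    rospy.spin()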