Many deep-learning-based semantic segmentation algorithms have been proposed to strengthen robot perception. However, most existing methods place little emphasis on applying deep neural networks within SLAM. Therefore, this paper proposes a new semantic Simultaneous Localization and Mapping (SLAM) system based on DeepLabV3+. The system can accurately reconstruct a dense 3D model of the observed environment. First, the DeepLabV3+ semantic extractor with the best visual effect is trained in a supervised manner, and robot motion estimation is performed by the ORB-SLAM2 framework. The semantic images, depth images, and pose transformation matrices are then sent to an efficient mapping module, which fuses them into an accurate semantic model. Experimental results show that the constructed map reflects the real distribution of objects in the scene, and that the system performs well on the standard TUM datasets. By loosely coupling a state-of-the-art segmentation network with the SLAM framework, the proposed method efficiently reduces the complexity of the semantic SLAM system. Moreover, this loose coupling allows the system's performance to improve as semantic segmentation networks and SLAM frameworks evolve.
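The fusion step of the mapping module can be illustrated with a minimal sketch: back-project each valid depth pixel into 3D, transform the points by the estimated camera pose, and carry the per-pixel semantic label along. The intrinsics, function name, and values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- illustrative values only.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def fuse_semantic_frame(depth, labels, pose):
    """Back-project a depth image to 3D, transform by the 4x4 camera pose,
    and attach per-pixel semantic labels -- a toy stand-in for the fusion
    performed by the mapping module."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # ignore missing depth readings
    z = depth[valid]
    x = (us[valid] - CX) * z / FX
    y = (vs[valid] - CY) * z / FY
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # homogeneous 4xN
    pts_world = (pose @ pts_cam)[:3].T                      # Nx3 world points
    return pts_world, labels[valid]

# Toy frame: 2x2 depth image with one invalid pixel, identity pose.
depth = np.array([[1.0, 2.0], [0.0, 1.5]])
labels = np.array([[3, 3], [0, 7]])
pts, labs = fuse_semantic_frame(depth, labels, np.eye(4))
print(pts.shape, labs.tolist())  # three valid pixels survive
```

In a full system, the world-frame points and labels from successive keyframes would be accumulated (e.g., in a voxel grid or octree) to form the dense semantic model.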