Paper
Reconstruction of 3D scenes from sequences of images (20 August 2013)
Proceedings Volume 8913, International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology; 89130H (2013) https://doi.org/10.1117/12.2033043
Event: ISPDI 2013 - Fifth International Symposium on Photoelectronic Detection and Imaging, 2013, Beijing, China
Abstract
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. Modeling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images: the system requires only a sequence of images taken by a camera, without prior knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object obtained from depth map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by a camera moving freely around the object. Second, the scene depth is obtained with a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, which is processed against its predecessor, the points of interest corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and extrinsic parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired with the non-local cost aggregation method for stereo matching, and a point cloud sequence is derived from the scene depths; the point clouds are merged into a single point cloud model using the extrinsic camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, texture is mapped onto the wire-frame model, which can also be used for 3D display. Experimental results show that a 3D point cloud model can be reconstructed more quickly and efficiently than with other methods.
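To make the pipeline concrete, the sketch below illustrates the pairwise matching and depth steps under stated assumptions: it uses OpenCV's SIFT implementation for feature matching, essential-matrix decomposition to recover the relative camera pose, semi-global block matching as a stand-in for the paper's non-local cost aggregation, and a simple back-projection of each depth map into a common frame using the recovered extrinsics. The intrinsic matrix K and the rectified image pairs are assumed to come from the calibration step described above; the function names are illustrative and not taken from the original system.

    # Illustrative sketch only (not the authors' code): SIFT matching,
    # essential-matrix pose recovery, SGBM disparity as a stand-in for the
    # non-local cost aggregation, and back-projection of depths to a point cloud.
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        # Detect and match SIFT features between two consecutive frames.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        raw = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in raw if m.distance < 0.7 * n.distance]  # ratio test
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
        # Essential matrix with RANSAC, decomposed into rotation R and translation t.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    def disparity_map(rect_left, rect_right):
        # Semi-global matching on a rectified pair; the paper instead uses a
        # non-local cost aggregation method, which is not reproduced here.
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
        return sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0

    def backproject(depth, K, R, t):
        # Lift a depth map to 3D points and move them into the reference frame
        # with the recovered extrinsics, so successive clouds can be merged.
        h, w = depth.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pts = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], -1)
        pts = pts.reshape(-1, 3)[depth.reshape(-1) > 0]   # drop invalid depths
        return (R.T @ (pts.T - t)).T                      # camera -> reference frame

In this sketch, the clouds returned by backproject for successive depth maps would simply be concatenated to form the point cloud model; the paper's point cloud splicing, triangular meshing, and texture mapping stages are not shown.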
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Bei Niu, Xinzhu Sang, Duo Chen, and Yuanfa Cai "Reconstruction of 3D scenes from sequences of images", Proc. SPIE 8913, International Symposium on Photoelectronic Detection and Imaging 2013: Optical Storage and Display Technology, 89130H (20 August 2013); https://doi.org/10.1117/12.2033043
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS: 3D modeling, Cameras, Clouds, Visual process modeling, Calibration, Reverse modeling, Systems modeling