Point clouds have attracted great attention in 3D object classification, segmentation, and indoor scene semantic parsing. In face recognition, although image-based algorithms have become more accurate and faster, open-world face recognition still suffers from influences such as illumination, occlusion, and pose. 3D face recognition based on point clouds containing both shape and texture information can compensate for these shortcomings. However, training a network to extract discriminative 3D features is complex and time-inefficient due to the lack of large training datasets. To address these problems, we propose a novel 3D face recognition network (FPCNet) using a modified PointNet++ and a 3D augmentation technique. A face-based loss and a multi-label loss are used to train FPCNet to make the learned features more discriminative. Moreover, a 3D face data augmentation method is proposed to synthesize identity-variant and expression-variant 3D faces from limited data. Our proposed method achieves excellent recognition results on the CASIA-3D, Bosphorus, and FRGC2.0 datasets and generalizes well to other datasets.
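The abstract does not detail the augmentation method, so the following is only a minimal illustrative sketch of generic 3D face point-cloud augmentation, not the authors' technique: optionally blending two point-wise corresponded scans (to vary identity/expression) followed by a small random pose rotation and Gaussian jitter. The function name and all parameters are hypothetical.

```python
import math
import random

def augment_face(points, partner=None, t=0.3, max_yaw_deg=15.0, jitter=0.002):
    """Illustrative 3D face augmentation sketch (not the paper's method):
    optionally blend two corresponded scans, then apply a small random
    yaw rotation and Gaussian jitter to each point."""
    # Blend with a second (point-wise corresponded) scan, if given.
    if partner is not None:
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, partner)]
    # Small random rotation about the vertical (y) axis to vary pose.
    theta = math.radians(random.uniform(-max_yaw_deg, max_yaw_deg))
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y, z in points:
        xr, zr = c * x + s * z, -s * x + c * z
        # Add small Gaussian noise to simulate sensor variation.
        out.append((xr + random.gauss(0, jitter),
                    y + random.gauss(0, jitter),
                    zr + random.gauss(0, jitter)))
    return out
```

With `jitter=0` and no partner the transform is a pure rigid rotation, so point norms are preserved; in practice such augmentations are applied on the fly during training to enlarge a limited dataset.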
Defocusing phase-shifting profilometry is now widely used in fast 3D shape measurement for its high efficiency and accuracy. However, motion-induced ripples and phase unwrapping errors can still be observed when the measured object is moving. We propose an efficient motion-compensated defocusing phase-shifting profilometry that accurately estimates the motion-induced shift at each pixel, both on the image plane and on the phase map, to reduce the artifacts that occur in dynamic scenes. The phase error is analyzed using N-step phase-shifting theory, and with the help of an optical flow method and a five-step phase-shifting method, this error can be corrected according to the obtained motion information. The phase unwrapping errors are also fixed by the proposed motion-compensated algorithm. Compared with traditional phase-shifting profilometry and state-of-the-art methods, our method demonstrates competitive performance on dynamic objects.
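For context on the N-step phase-shifting theory the abstract builds on, the sketch below shows the standard per-pixel wrapped-phase retrieval from N phase-shifted fringe intensities. This is the textbook formula only; the paper's contribution, compensating this computation for inter-frame motion, is not reproduced here.

```python
import math

def nstep_phase(frames):
    """Standard N-step phase retrieval at one pixel.
    frames[n] = A + B * cos(phi + 2*pi*n/N); returns wrapped phi."""
    N = len(frames)
    num = sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(frames))
    den = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(frames))
    # For the shift convention above, phi = atan2(-num, den).
    return math.atan2(-num, den)
```

When the object moves between captures, each frame's effective phase shift deviates from 2*pi*n/N, which is what produces the motion-induced ripples the proposed method estimates and removes.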