Presentation + Paper
18 June 2024
Inpainting sparse scenes through physics aware transformers for single-photon LiDAR
Luke McEvoy, Daniel Tafone, Yong Meng Sua, Yuping Huang
Abstract
We increase single-photon LiDAR capabilities via a hardware-accelerating inpainting transformer model. Communicating with the beam-steering hardware, the model reconstructs all non-observed information within the image plane. We apply this to 3D time-of-flight (ToF) reconstruction of scenes in which objects obstruct each other's line of sight. ToF histograms are used to distinguish objects in the foreground from those in the background, and their overlap is treated as the dynamic mask for the model to reconstruct. We also apply the approach to unconventional scanning patterns, such as Lissajous and spiral trajectories, which are inherently sparse. Lastly, we are developing an AI-driven MEMS system that intelligently downsamples the image plane based on foreground masks to combat sampling redundancy. We believe our approach will be useful for imaging and sensing dynamic targets with sparse single-photon data across all domains.
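A minimal sketch of the pipeline described above, assuming synthetic histogram data: ToF histograms are split at a time bin into foreground and background returns, their overlap becomes the dynamic mask handed to the inpainting model, and a Lissajous trajectory is rendered as a sparse sampling mask. The array shapes, thresholds, and trajectory frequencies are illustrative assumptions, not values taken from the paper.

# Illustrative sketch only (not the authors' code): foreground/background
# separation from ToF histograms and an overlap-based inpainting mask.
# All shapes, the split bin, the count threshold, and the Lissajous
# frequencies are assumptions made for this example.
import numpy as np

H, W, T = 64, 64, 256          # assumed image-plane size and number of ToF bins
rng = np.random.default_rng(0)

# Synthetic per-pixel photon-count histograms (stand-in for single-photon data).
histograms = rng.poisson(lam=0.2, size=(H, W, T))

# Split each histogram at an assumed time-of-flight bin: earlier returns are
# treated as foreground, later returns as background.
split_bin = 128
fg_counts = histograms[:, :, :split_bin].sum(axis=2)
bg_counts = histograms[:, :, split_bin:].sum(axis=2)

# Binary occupancy masks from a simple (assumed) count threshold.
count_threshold = 5
fg_mask = fg_counts > count_threshold
bg_mask = bg_counts > count_threshold

# Pixels where foreground and background returns overlap: the region that,
# in this sketch, would be handed to the inpainting model as the dynamic mask.
dynamic_mask = fg_mask & bg_mask

# Sparse Lissajous scan pattern: only these pixels are physically sampled;
# everything else is left for the transformer to reconstruct.
t = np.linspace(0, 2 * np.pi, 2000)
x = ((np.sin(3 * t) + 1) / 2 * (W - 1)).astype(int)
y = ((np.sin(4 * t + np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
sampled = np.zeros((H, W), dtype=bool)
sampled[y, x] = True

print("overlap pixels to inpaint:", dynamic_mask.sum())
print("scan coverage: %.1f%%" % (100 * sampled.mean()))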
Luke McEvoy, Daniel Tafone, Yong Meng Sua, and Yuping Huang "Inpainting sparse scenes through physics aware transformers for single-photon LiDAR", Proc. SPIE 12996, Unconventional Optical Imaging IV, 129960K (18 June 2024); https://doi.org/10.1117/12.3014641
KEYWORDS
LIDAR, 3D mask effects, 3D modeling, Transformers, 3D image reconstruction, Image restoration, 3D acquisition