Airborne LIDAR sensors can produce accurate 3D point clouds for terrain mapping across a range of altitudes. As altitude increases, larger apertures are needed to collect sufficient photons and preserve spatial resolution. In conical scanning optical systems, axially spinning refractive wedges can sweep the beam across the field of regard. However, maintaining rotational balance for refractive wedges is challenging, particularly at deflection angles exceeding several degrees, because of their asymmetric moment of inertia. A holographic optical element (HOE) offers an alternative scanning optic with a symmetric moment of inertia, addressing the stability concerns that large scan angles pose for refractive wedge-based scanners. Our study shows that HOEs can accommodate a wide range of scan angles and aperture sizes without compromising volumetric constraints or stability, demonstrating their effectiveness as scanning optics for LIDAR sensors.
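The conical scan geometry described above follows from simple trigonometry: a spinning wedge or HOE deflects the beam by a fixed half-cone angle, so from a given altitude the ground footprint traces a circle whose radius grows with the tangent of that angle. A minimal sketch (the function name and the example numbers are illustrative, not values from the paper):

```python
import math

def conical_scan_point(altitude_m, deflection_deg, spin_angle_deg):
    """Ground-plane position of the laser footprint for a conical scan.

    A spinning wedge or HOE deflects the beam by a fixed half-cone
    angle (deflection_deg); as the optic rotates through
    spin_angle_deg, the footprint traces a circle of radius
    altitude * tan(deflection) centered on nadir.
    """
    radius = altitude_m * math.tan(math.radians(deflection_deg))
    phi = math.radians(spin_angle_deg)
    return (radius * math.cos(phi), radius * math.sin(phi))

# Illustrative example: from 3 km altitude with a 15-degree scan
# half-angle, the scan circle radius is about 804 m.
x, y = conical_scan_point(3000.0, 15.0, 0.0)
```

This also makes the motivation concrete: the larger the desired scan angle, the larger the wedge deviation must be, which is exactly the regime where refractive wedges become hard to balance.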
Over the past 15 years, the Massachusetts Institute of Technology Lincoln Laboratory (MIT/LL), the Defense Advanced Research Projects Agency (DARPA), and private industry have been developing airborne LiDAR systems based on arrays of Geiger-mode avalanche photodiode (GmAPD) detectors capable of detecting a single photon. The extreme sensitivity of GmAPD detectors allows LiDAR sensors to operate at unprecedented altitudes and at area collection rates in excess of 1,000 km²/hr. Until now, the primary emphasis of this technology has been on defense applications, despite the significant benefits of applying it to non-military uses such as mapping, monitoring critical infrastructure, and disaster relief. This paper briefly describes the operation of GmAPDs, the design and operation of a Geiger-mode LiDAR, a comparison of Geiger-mode and traditional linear-mode LiDARs, and the first commercial Geiger-mode LiDAR system, the IntelliEarth™ Geospatial Solutions Geiger-mode LiDAR sensor.
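The single-photon sensitivity mentioned above has a standard statistical consequence: because photon arrivals are Poisson-distributed and a GmAPD fires on the first photoelectron, the per-pulse detection probability is 1 − exp(−ηN) for mean signal photon number N and detection efficiency η. A minimal sketch of this textbook relation (the 0.3 efficiency is an illustrative placeholder, not a figure from the paper):

```python
import math

def gmapd_detection_prob(mean_signal_photons, detection_efficiency=0.3):
    """Per-pulse firing probability of a Geiger-mode APD.

    Photon arrivals are Poisson-distributed, and the detector fires
    if at least one photoelectron is generated, so
    P(fire) = 1 - exp(-eta * N).  Dark counts and blocking loss are
    ignored in this simplified sketch.
    """
    return 1.0 - math.exp(-detection_efficiency * mean_signal_photons)

# Even a mean return of ~1 photon per pulse yields a usable
# detection probability, which is what enables high-altitude,
# high-area-rate collection.
p = gmapd_detection_prob(1.0)
```

In practice a Geiger-mode sensor accumulates many such low-probability detections over repeated pulses and forms the 3D image from coincidence statistics rather than from single strong returns.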
Mohan Vaidyanathan, Steven Blask, Thomas Higgins, William Clifton, Daniel Davidsohn, Ryan Carson, Van Reynolds, Joanne Pfannenstiel, Richard Cannata, Richard Marino, John Drover, Robert Hatch, David Schue, Robert Freehart, Greg Rowe, James Mooney, Carl Hart, Byron Stanley, Joseph McLaughlin, Eui-In Lee, Jack Berenholtz, Brian Aull, John Zayhowski, Alex Vasile, Prem Ramaswami, Kevin Ingersoll, Thomas Amoruso, Imran Khan, William Davis, Richard Heinrichs
KEYWORDS: Sensors, LIDAR, 3D image processing, 3D acquisition, Target detection, Imaging systems, Image processing, Control systems, Image sensors, Data processing
Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for semi-autonomously imaging highly obscured targets through dense foliage from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued target and autonomously capture and produce 3D images of targets hidden under trees at high voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been integrated into a geo-referenced 12-inch gimbal and used in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for data collection and system validation. In this paper, we discuss results from ground integration and testing of the system and from the UH-1 flight data collections. We also present performance results obtained using ladar calibration targets.
Recently developed airborne imaging laser radar systems are capable of rapidly collecting accurate and precise spatial information for topographic characterization as well as surface imaging. However, the performance of airborne ladar (laser detection and ranging) collection systems often depends on the density and distribution of tree canopy over the area of interest, which obscures the ground and objects close to it, such as buildings or vehicles. Traditionally, estimates of canopy obscuration are made using ground-based methods, which are time-consuming and, for airborne collections, valid only over a small area and for specific collection geometries. Since ladar systems collect a spatially and temporally dense set of returns in 3D space, the return reflections can be used to differentiate ground returns from tree-canopy returns and to monitor their relative density, measuring sensor performance in near real time for any collection geometry or foliage density without relying on ground-based measurements. Additionally, an agile airborne ladar collection system could use prior estimates of the degree and spatial distribution of the tree canopy over a given area to determine optimal geometries for future collections. In this paper, we report on methods to rapidly quantify the magnitude and distribution of the spatial structure of obscuring canopy for a series of airborne high-resolution imaging ladar collections in a mature, mixed deciduous forest.
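The core idea of the abstract above, classifying returns as ground versus canopy and measuring their relative density per area, can be sketched as a gridded obscuration estimate. This is a simplified illustration of the concept, not the authors' algorithm; the function name, cell size, and the assumption that returns are already labeled canopy/ground (e.g., by a height-above-ground threshold) are all illustrative:

```python
import numpy as np

def canopy_obscuration_grid(x, y, is_canopy, cell_size=10.0):
    """Fraction of ladar returns intercepted by canopy, per grid cell.

    x, y      : horizontal coordinates of each return (meters)
    is_canopy : boolean array, True where the return is classified as
                canopy (e.g., height above ground exceeds a threshold)
    Returns a 2D array of canopy-return fractions; cells that received
    no returns are NaN.
    """
    ix = np.floor((x - x.min()) / cell_size).astype(int)
    iy = np.floor((y - y.min()) / cell_size).astype(int)
    shape = (ix.max() + 1, iy.max() + 1)
    total = np.zeros(shape)
    canopy = np.zeros(shape)
    np.add.at(total, (ix, iy), 1.0)           # all returns per cell
    np.add.at(canopy, (ix, iy), is_canopy.astype(float))
    return np.where(total > 0, canopy / np.maximum(total, 1.0), np.nan)
```

A map like this, computed in near real time during a collection, is the kind of product that could feed the geometry-planning step the abstract describes.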