In response to the 2010 Haiti earthquake, the ALIRT ladar system was tasked with collecting surveys to
support disaster relief efforts. Standard methodologies for classifying the ladar data as ground, vegetation, or
man-made features failed to produce an accurate representation of the underlying terrain surface. Most of
these methods rely primarily on gradient-based operations that perform well in areas of low topographic
relief but often fail in areas of high topographic relief or in dense urban environments. An alternative
approach, based on an adaptive lower envelope follower (ALEF) with an adaptive gradient operation that
accommodates local slope and roughness, was investigated for recovering the ground surface from the
ladar data. This technique successfully classified terrain in the urban and rural areas of Haiti over
which the ALIRT data had been acquired.
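The abstract does not give ALEF's actual formulation, but the core idea of a lower-envelope follower with a slope-aware gradient test can be sketched in one dimension. The function below is an illustrative reconstruction, not the paper's algorithm: `window`, `slope_tol`, and `dx` are assumed parameters, and the adaptive behavior is reduced to a single slope-tolerance check against the local minimum.

```python
import numpy as np

def lower_envelope(z, window=5, slope_tol=0.5, dx=1.0):
    """Estimate a ground surface as a lower envelope of elevations z.

    A sample is kept as ground if its rise above the local minimum is
    consistent with the allowed slope (slope_tol, in elevation units per
    sample spacing dx); otherwise it is pulled down to the local minimum.
    Generic 1-D sketch with illustrative parameters, not ALEF itself.
    """
    z = np.asarray(z, dtype=float)
    n = z.size
    half = window // 2
    ground = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        local_min = z[lo:hi].min()
        # slope-aware test: accept the point if it could lie on terrain
        # that climbs no faster than slope_tol within the window
        if z[i] - local_min <= slope_tol * dx * half:
            ground[i] = z[i]
        else:
            ground[i] = local_min  # treat as vegetation/structure return
    return ground
```

On a flat profile with a single elevated return (e.g. a tree canopy hit), the filter pulls the outlier down to the surrounding ground level while leaving true ground samples untouched.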
Mohan Vaidyanathan, Steven Blask, Thomas Higgins, William Clifton, Daniel Davidsohn, Ryan Carson, Van Reynolds, Joanne Pfannenstiel, Richard Cannata, Richard Marino, John Drover, Robert Hatch, David Schue, Robert Freehart, Greg Rowe, James Mooney, Carl Hart, Byron Stanley, Joseph McLaughlin, Eui-In Lee, Jack Berenholtz, Brian Aull, John Zayhowski, Alex Vasile, Prem Ramaswami, Kevin Ingersoll, Thomas Amoruso, Imran Khan, William Davis, Richard Heinrichs
KEYWORDS: Sensors, LIDAR, 3D image processing, 3D acquisition, Target detection, Imaging systems, Image processing, Control systems, Image sensors, Data processing
Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for imaging
highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The
Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued
target and to autonomously capture and produce 3D images of targets hidden under trees at high 3D
voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been
integrated into a geo-referenced 12-inch gimbal and used in airborne data collections from a UH-1
manned helicopter, which served as a surrogate platform for data collection and
system validation. In this paper, we discuss the results from the ground integration and testing of the
system and from the UH-1 flight data collections. We also discuss the performance of the
system measured using ladar calibration targets.
Recently developed airborne imaging laser radar systems are capable of rapidly collecting accurate and precise spatial information for topographic characterization as well as surface imaging. However, the performance of airborne ladar (laser detection and ranging) collection systems often depends upon the density and distribution of tree canopy over the area of interest, which obscures the ground and objects close to the ground such as buildings or vehicles. Traditionally, estimates of canopy obscuration are made using ground-based methods, which are time-consuming and valid only for a small area and for specific airborne collection geometries. Since ladar systems collect a spatially and temporally dense set of returns in 3D space, the return reflections can be used to differentiate and monitor the density of ground and tree canopy returns, measuring sensor performance in near real time for any arbitrary collection geometry or foliage density without relying on ground-based measurements. Additionally, an agile airborne ladar collection system could use prior estimates of the degree and spatial distribution of tree canopy over a given area to determine optimal geometries for future collections. In this paper, we report on methods to rapidly quantify the magnitude and distribution of the spatial structure of obscuring canopy for a series of airborne high-resolution imaging ladar collections in a mature, mixed deciduous forest.
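The idea of estimating obscuration directly from the return stream can be sketched simply: classify each return as ground or canopy by its height above ground and take the canopy fraction. This is a minimal illustration of the ratio computation only; the height-above-ground inputs and the `ground_tol` threshold are assumptions, not the paper's method.

```python
import numpy as np

def obscuration_fraction(heights, ground_tol=1.0):
    """Fraction of ladar returns intercepted by canopy.

    heights: height-above-ground of each return, in meters.
    Returns at or below ground_tol count as ground hits; the rest
    count as canopy hits. ground_tol is an illustrative threshold.
    """
    heights = np.asarray(heights, dtype=float)
    canopy_hits = np.count_nonzero(heights > ground_tol)
    return canopy_hits / heights.size
```

Computed per grid cell and per collection geometry, such a fraction could be compared across look angles to pick the least-obscured geometry for a follow-up collection.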
Automatic and timely image registration and alignment for producing highly accurate geodetic coordinates is of interest to tactical systems involved in battlespace awareness. We present an approach to registration that applies rigorous photogrammetric techniques to sensor geometry models to achieve registration accuracy of only a few pixels. Image collection is fully modeled in terms of its static geometry, including aircraft and sensor parameters. The registration process not only aligns imagery but also significantly reduces geoposition errors when multiple images are used. A normalized cross-correlation is applied to align image pixels through adjustments to the initial collection geometry. Our process is fully automatic and requires no operator intervention. A side benefit of this technique is that registration time is largely independent of image size. Registration can be applied to imagery from disparate sensors, such as Synthetic Aperture Radar (SAR), Electro-Optical (EO), Multi-Spectral, and Infrared, in a multi-sensor fusion approach to reduce geodetic errors. This approach is implemented on standard Commercial-Off-The-Shelf hardware and has been tested on SAR and EO imagery at near real-time processing rates.
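The normalized cross-correlation score the abstract relies on is a standard similarity measure; a minimal patch-level version is below. This shows the metric itself, not the paper's geometry-adjustment loop, which searches over collection-geometry parameters to maximize such a score.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size image patches.

    Returns a score in [-1, 1]; +1 indicates the patches match up to a
    positive affine change of intensity (gain and offset), which is why
    NCC is robust across sensors with different radiometry.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()  # remove intensity offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

For example, a patch and a brightness- and contrast-shifted copy of it score 1.0, while uncorrelated patches score near 0; an alignment search would evaluate this score for each candidate geometry adjustment and keep the maximizer.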