Field measurement campaigns typically deploy numerous sensors with different sampling characteristics in the spatial,
temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample
data grids of different sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing
chain capable of “fusing” image data from multiple independent and asynchronous sensors into a form amenable to
analysis and exploitation using commercially-available tools.
Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2)
image temporal interpolation onto a common time base. The first step leverages existing image matching and registration
algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal
upsampling of slower-frame-rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution
imagery and then used as a basis for temporal upsampling of the slower-frame-rate sensor's imagery.
Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion.
This involves preprocessing imagery to varying resolution scales and initializing each new flow vector estimate using
the estimate from the previous, coarser-resolution scale.
Overall performance of this processing chain is demonstrated using sample data involving complex tool motion observed
by multiple sensors mounted to the same base. The sensors ranged from a high-speed visible camera to a coarser-resolution
LWIR camera.
The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed “Z-Chrome” algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large set of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
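The prediction step can be illustrated with a least-squares sketch; this is an assumption-laden stand-in, not the Z-Chrome implementation itself. Each new band is modeled as a linear (Wiener-style) combination of already-encoded bands plus an offset, and only the integer residual goes to the lossless entropy coder; in a real codec the fitted weights would be quantized and transmitted so the decoder can reproduce the prediction exactly.

```python
import numpy as np

def predict_band(encoded_bands, new_band):
    """Sketch: least-squares prediction of `new_band` from already-encoded
    bands; returns the integer residual for downstream lossless coding.
    (Illustrative only -- the published Z-Chrome predictor is not shown.)"""
    # Design matrix: one column per encoded band, plus a constant offset.
    X = np.column_stack([b.ravel() for b in encoded_bands]
                        + [np.ones(new_band.size)])
    y = new_band.ravel().astype(np.float64)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    prediction = np.rint(X @ w).astype(np.int64)
    residual = new_band.ravel().astype(np.int64) - prediction
    # The decoder repeats the prediction from decoded bands (with the same
    # quantized weights) and adds the residual to recover the band exactly.
    return residual.reshape(new_band.shape), w
```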
There is a need for a Precision Radiometric Surface Temperature (PRST) measurement capability that can achieve noncontact profiling of a sample’s surface temperature when heated dynamically during laser processing, aerothermal heating, or metal cutting/machining. Target surface temperature maps within and near the heated spot provide critical quantitative diagnostic data for laser-target coupling effectiveness and laser damage assessment. In the case of metal cutting, this type of measurement provides information on plastic deformation in the primary shear zone where the cutting tool is in contact with the workpiece. The challenge in these cases is to measure the temperature of a target while its surface’s temperature and emissivity are changing rapidly and with incomplete knowledge of how the emissivity and surface texture (scattering) change with temperature. Bodkin Design and Engineering, LLC (BDandE), with partners Spectral Sciences, Inc. (SSI) and Space Computer Corporation (SCC), has developed a PRST Sensor that is based on a hyperspectral MWIR imager spanning the wavelength range 2-5 μm and providing a hyperspectral datacube of 20-24 wavelengths at a 60 Hz frame rate or faster. This imager is integrated with software and algorithms to extract surface temperature from radiometric measurements over the range from ambient to 2000K with a precision of 20K, even without a priori knowledge of the target’s emissivity and even as the target emissivity may be changing with time and temperature. In this paper, we will present a description of the PRST system as well as laser heating test results which show the PRST system mapping target surface temperatures in the range 600-2600K on a variety of materials.
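The core multiband idea can be sketched as follows; this is a gray-body toy model for illustration only, whereas the actual PRST algorithms handle unknown, time-varying, and wavelength-dependent emissivity. For each trial temperature, the best-fit emissivity scale factor has a closed form, and the residual spectral-shape mismatch selects the temperature.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Physical constants for the Planck spectral radiance.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def fit_temperature(lams, radiance):
    """Toy multiband radiometric fit under a gray-body assumption: the
    least-squares emissivity at each trial T is a closed-form scale factor,
    and the remaining spectral-shape mismatch is minimized over T."""
    def cost(T):
        b = planck(lams, T)
        eps = np.dot(radiance, b) / np.dot(b, b)  # best-fit gray emissivity
        return np.sum((radiance - eps * b) ** 2)
    res = minimize_scalar(cost, bounds=(300.0, 3000.0), method='bounded')
    return res.x
```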
Hyperspectral imaging has important benefits in remote sensing and target discrimination applications. This paper
describes a class of snapshot-mode hyperspectral imaging systems which utilize a unique optical processor that provides
video-rate hyperspectral datacubes. These systems consist of numerous parallel optical paths that collect the full
three-dimensional (two spatial, one spectral) hyperspectral datacube with each video frame and are ideal for recording data
from transient events or on unstable platforms.
We will present the results of laboratory and field tests for several of these imagers operating at visible, near-infrared,
MWIR and LWIR wavelengths. Measurement results for nitrate detection and identification as well as additional
chemical identification and analysis will be presented.
In EO tracking, target spatial and spectral features can be used to improve performance since they help distinguish the
targets from each other when confusion occurs during normal kinematic tracking. In this paper we introduce a method
to encode a target's descriptive spatial information into a multi-dimensional signature vector, allowing us to convert the
problem of spatial template matching into a form similar to spectral signature matching. This allows us to apply
multivariate algorithms commonly used with hyperspectral data to the exploitation of panchromatic imagery. We
show how this spatial signature formulation naturally leads to a hybrid spatial-spectral descriptor vector that supports
exploitation using commonly-used spectral algorithms.
We introduce a new descriptor called Spectral DAISY for encoding spatial information into a signature vector, based on
the concept of the DAISY dense descriptor. We demonstrate the process on real data and show how the combined
spatial/spectral feature can be used to improve target/track association over spectral or spatial features alone.
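A minimal sketch of the idea follows, using the DAISY implementation in scikit-image; the normalization and weighting are assumptions for illustration, not the paper's exact Spectral DAISY formulation. A dense descriptor computed on a panchromatic target chip is flattened and concatenated with the target's spectrum, so the hybrid vector can be scored with a standard spectral matcher such as the spectral angle.

```python
import numpy as np
from skimage.feature import daisy

def hybrid_signature(pan_chip, spectrum, w_spatial=0.5):
    """Sketch of a hybrid spatial-spectral signature vector.
    `pan_chip` is a square grayscale target chip (side >= ~16 px);
    `spectrum` is the target's spectral signature."""
    # One DAISY descriptor centered on the chip; daisy() returns (P, Q, D).
    side = pan_chip.shape[0]
    d = daisy(pan_chip, step=side, radius=side // 2 - 1,
              rings=2, histograms=6, orientations=8)
    spatial = d.reshape(-1)
    spatial = spatial / (np.linalg.norm(spatial) + 1e-12)
    spectral = spectrum / (np.linalg.norm(spectrum) + 1e-12)
    return np.concatenate([w_spatial * spatial, (1.0 - w_spatial) * spectral])

def spectral_angle(a, b):
    """Standard spectral-angle score, applied to hybrid signature vectors."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))
```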
Hyperspectral imagers tend to have lower spatial resolution than multispectral ones. This often results in a (sometimes
difficult) trade-off between spectral and spatial resolution. We have developed a technique, called CRISP, that combines
low-resolution hyperspectral data and high-resolution multispectral data to produce high quality, high-resolution
hyperspectral data. This technique shows good quantitative performance when applied to realistic applications such as
land cover estimation and anomaly detection. As a test of this technique, we have performed an experiment using
HyMap hyperspectral data and data from multispectral instruments over the coastal waters of Oahu, Hawaii. The accuracy of the
CRISP sharpening approach when used for coastal applications such as depth mapping is assessed.
Hyperspectral imaging has important benefits in remote sensing and material identification.
This paper describes a class of hyperspectral imaging systems which utilize a novel optical
processor that provides video-rate hyperspectral datacubes. These systems have no moving
parts and do not operate by scanning in either the spatial or spectral dimension. They are
capable of recording a full three-dimensional (two spatial, one spectral) hyperspectral datacube
with each video frame, ideal for recording data on transient events, or from unstabilized
platforms. We will present the results of laboratory and field tests for several of these imagers
operating in the visible, near-infrared, mid-wavelength infrared (MWIR) and long-wavelength
infrared (LWIR) regions.
We have developed a new and innovative technique for combining a high-spatial-resolution multispectral image with a
lower-spatial-resolution hyperspectral image. The approach, called CRISP, compares the spectral information present
in the multispectral image to the spectral content in the hyperspectral image and derives a set of equations to
approximately transform the multispectral image into a synthetic hyperspectral image. This synthetic hyperspectral
image is then recombined with the original low-spatial-resolution hyperspectral image to produce a sharpened product.
The result is a product that has the spectral properties of the hyperspectral image at a spatial resolution approaching
that of the multispectral image. To test the accuracy of the CRISP method, we applied the method to synthetic data
generated from hyperspectral images acquired with an airborne sensor. These high-spatial-resolution images were used
to generate both a lower-spatial-resolution hyperspectral data set and a four-band multispectral data set. With this
method, it is possible to compare the output of the CRISP process to the 'truth data' (the original scene). In all of these
controlled tests, the CRISP product showed both good spectral and visual fidelity, with an RMS error less than one
percent when compared to the 'truth' image. We then applied the method to real world imagery collected by the
Hyperion sensor on EO-1 as part of the Hurricane Katrina support effort. In addition to multiple Hyperion data sets,
both Ikonos and QuickBird data were also acquired over the New Orleans area. Following registration of the data sets,
multiple high-spatial-resolution CRISP-generated hyperspectral data sets were created. In this paper, we present the
results of this study, which show the utility of the CRISP-sharpened products for forming material classification maps at four-meter
resolution from space-based hyperspectral data. These products are compared to the equivalent products
generated from the source 30m resolution Hyperion data.
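The CRISP flow described above can be summarized in a short sketch; the regression form and recombination step here are illustrative assumptions, not the exact published formulation. A linear map from multispectral bands to each hyperspectral band is fit at low resolution, applied at high resolution to form the synthetic hyperspectral image, and the low-frequency residual of the original hyperspectral image is added back.

```python
import numpy as np

def crisp_style_sharpen(hsi_lo, msi_hi, msi_lo, upsample):
    """Illustrative CRISP-style sharpening.
    hsi_lo:   (h, w, B) low-resolution hyperspectral cube.
    msi_hi:   (H, W, b) high-resolution multispectral image.
    msi_lo:   (h, w, b) multispectral image degraded to the HSI grid.
    upsample: callable lifting an (h, w, B) cube to (H, W, B)."""
    h, w, B = hsi_lo.shape
    b = msi_lo.shape[2]
    # 1) Fit linear maps (plus offset) from MSI bands to each HSI band.
    X = np.column_stack([msi_lo.reshape(-1, b), np.ones(h * w)])
    Y = hsi_lo.reshape(-1, B)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)          # (b+1, B)
    # 2) Apply the maps at high resolution: synthetic hyperspectral image.
    Hh, Wh = msi_hi.shape[:2]
    Xh = np.column_stack([msi_hi.reshape(-1, b), np.ones(Hh * Wh)])
    synth = (Xh @ coef).reshape(Hh, Wh, B)
    # 3) Recombine with the original HSI: add back the low-resolution
    #    residual so the product keeps the hyperspectral spectral properties.
    residual = upsample(hsi_lo - (X @ coef).reshape(h, w, B))
    return synth + residual
```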
Hyperspectral imagers tend to have lower spatial resolution than multispectral ones. This often results in a (sometimes difficult) trade-off between spectral and spatial resolution. One means of addressing this spatial/spectral resolution trade-off is to acquire both multispectral and hyperspectral data simultaneously, and then combine the two to produce a hyperspectral image with the high spatial resolution of the multispectral image. This process, called 'sharpening', results in a product that fuses the rich spectral content of a hyperspectral image with the high spatial content of the multispectral image. The approach we have been investigating compares the spectral information present in the multispectral image to the spectral content in the hyperspectral image and derives a set of equations to approximately transform the multispectral image into a synthetic hyperspectral image. This synthetic hyperspectral image is then recombined with the original low-spatial-resolution hyperspectral image to produce a sharpened product. We have evaluated this technique against several types of data for terrain classification and it has demonstrated good performance across all data sets. The spectra predicted by the sharpening algorithm match truth spectra in synthetic image tests, and detection algorithms show little, if any, degradation in performance.
Multispectral sharpening of hyperspectral imagery fuses the spectral content of a hyperspectral image with the spatial and spectral content of the multispectral image. The approach we have been investigating compares the spectral information present in the multispectral image to the spectral content in the hyperspectral image and derives a set of equations to approximately transform the multispectral image into a synthetic hyperspectral image. This synthetic hyperspectral image is then recombined with the original low-resolution hyperspectral image to produce a sharpened product. We evaluate this technique against several types of data, showing good performance across all data sets. Recent improvements in the algorithm allow target detection to be performed without loss of performance even at extreme sharpening ratios.
Robust, timely, and remote detection of mines and minefields is central to both tactical and humanitarian demining efforts, yet remains elusive for single-sensor systems. Here we present an approach to jointly exploit multisensor data for detection of mines from remotely sensed imagery. LWIR, MWIR, laser, multispectral, and radar sensors have been applied individually to mine detection, and each has shown promise for supporting automated detection. However, none of these sources individually provides a full solution for automated mine detection under all expected mine, background, and environmental conditions. Under support from the Night Vision and Electronic Sensors Directorate (NVESD) we have developed an approach that, through joint exploitation of multiple sensors, improves detection performance over that achieved with a single sensor. In this paper we describe the joint exploitation method, which is based on fundamental detection-theoretic principles, demonstrate the strength of the approach on imagery from minefields, and discuss extensions of the method to additional sensing modalities. The approach uses pre-threshold anomaly detector outputs to formulate accurate models for marginal and joint statistics across multiple detection or sensor features. This joint decision space is modeled and decision boundaries are computed from measured statistics. Since the approach adapts the decision criteria based on the measured statistics and no prior target training information is used, it provides a robust multi-algorithm or multisensor detection statistic. Results from joint exploitation processing of imagery over surface mines, acquired by NVESD with two different imaging sensors, will be presented to illustrate the process. The potential of the approach to incorporate additional sensor sources, such as radar, multispectral, and hyperspectral imagery is also illustrated.
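The joint-exploitation idea admits a compact sketch; the single-Gaussian background model and quantile threshold below are simplifying assumptions, whereas the paper's statistical models are more general. Pre-threshold anomaly scores from two sensors are paired per pixel, their joint background statistics are measured from the data, and the decision boundary follows from the measured statistics with no target training.

```python
import numpy as np

def joint_detection(scores_a, scores_b, pfa=1e-3):
    """Sketch: fuse two pre-threshold detector outputs via measured joint
    background statistics (single Gaussian here, for illustration)."""
    z = np.column_stack([scores_a.ravel(), scores_b.ravel()])
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance of each score pair from the background.
    d2 = np.einsum('ij,jk,ik->i', z - mu, inv, z - mu)
    # Data-adaptive decision boundary at the desired false-alarm fraction.
    thresh = np.quantile(d2, 1.0 - pfa)
    return (d2 > thresh).reshape(scores_a.shape), d2.reshape(scores_a.shape)
```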
Unsupervised classification of multispectral and hyperspectral data is useful for a range of military and commercial remote sensing applications. These include terrain categorization, material detection and identification, and land use quantification. Here we show the development and application of an adaptive Gaussian Spectral Clustering approach to unsupervised classification of hyperspectral data. The method is built on adaptively estimating the parameters of a Gaussian mixture model over local regions, and includes methods for adjusting to the inevitable non-stationarity of hyperspectral image data. The algorithm is suitable for application to streaming hyperspectral data as would be required for real-time applications. In this paper we outline the model used, estimation techniques, and methods for adaptively estimating key model parameters required to characterize hyperspectral imagery. The key elements of the approach are demonstrated on reflective-band data from the NRL WarHORSE and NASA AVIRIS hyperspectral sensors.
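A minimal sketch of the local Gaussian-mixture idea is shown below, using scikit-learn's mixture fit per spatial tile to follow non-stationarity; the adaptive parameter estimation and streaming machinery of the actual algorithm are omitted, and the tile size and class count are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_gmm_labels(cube, n_classes=8, tile=128):
    """Sketch: fit a Gaussian mixture independently over each local tile of a
    hyperspectral cube so the clutter model adapts to non-stationarity.
    Note: class labels are per-tile here; the real method maintains a
    consistent, adaptively updated class set across the scene."""
    Hc, Wc, B = cube.shape
    labels = np.zeros((Hc, Wc), dtype=np.int32)
    for r in range(0, Hc, tile):
        for c in range(0, Wc, tile):
            block = cube[r:r + tile, c:c + tile]
            X = block.reshape(-1, B)
            gmm = GaussianMixture(n_components=n_classes,
                                  covariance_type='full').fit(X)
            labels[r:r + tile, c:c + tile] = \
                gmm.predict(X).reshape(block.shape[:2])
    return labels
```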
KEYWORDS: Signal to noise ratio, Transform theory, Signal attenuation, Detection and tracking algorithms, Quantization, Signal detection, Sensors, Optical filters, Hyperspectral imaging, Image filtering
Hyperspectral images may be collected in tens to hundreds of spectral bands having bandwidths on the order of 1-10 nanometers. Principal component (PC), maximum-noise-fraction (MNF), and vector quantization (VQ) transforms are used for dimension reduction and subspace selection. The impact of the PC, MNF, and VQ transforms on image quality is measured in terms of mean-squared error, image-plus-noise variance to noise variance, and maximal-angle error, respectively. These transforms are not optimal for detection problems. The signal-to-noise ratio (SNR) is a fundamental parameter for detection and classification. In particular, for additive signals in a normally distributed background, the performance of the matched filter depends on SNR, and the performance of the quadratic anomaly detector depends on SNR and the number of degrees-of-freedom. In this paper we demonstrate the loss in SNR that can occur from the application of the PC, MNF, and VQ transforms. We define a whitened-vector-quantization (WVQ) transform that can be used to reduce the dimension of the data such that the loss in SNR is bounded, and we construct a transform (SSP) that preserves SNR for signals contained in a given subspace such that the dimension of the image of the transform is the dimension of the subspace.
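The SNR bookkeeping behind these results can be summarized with two standard identities (stated here as background, not as the paper's derivation):

```latex
% Additive signal s in a background with covariance \Sigma: the
% matched-filter output SNR is
\mathrm{SNR} \;=\; s^{\mathsf{T}} \Sigma^{-1} s .
% After a dimension-reducing linear transform T \in \mathbb{R}^{k\times p}
% (k < p), the best achievable SNR in the reduced space is
\mathrm{SNR}_T \;=\; (Ts)^{\mathsf{T}} \bigl(T \Sigma T^{\mathsf{T}}\bigr)^{-1} (Ts)
\;\le\; s^{\mathsf{T}} \Sigma^{-1} s ,
% with equality exactly when the whitened signal \Sigma^{-1/2}s lies in the
% row space of T\Sigma^{1/2}. An SNR-preserving transform (such as SSP, for a
% designated signal subspace) is constructed so this condition holds for
% every signal in that subspace.
```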
Hyperspectral data provides the opportunity to perform a classification of scene data by either deterministic or stochastic techniques. A typical deterministic technique is linear unmixing. This involves finding certain basis spectra called 'end-members' within the scene. Once these spectra are found, the image cube can be unmixed into a map of fractional abundances of each material in each pixel. The N-FINDR algorithm autonomously finds these end-member spectra within the data and then unmixes the scene by determining the fraction of each end-member in each pixel. A stochastic technique for characterizing spectral classes is the Stochastic Expectation Maximization (SEM) approach. This is a spectral clustering technique for classifying spectral terrain data that involves iterative estimation of a Gaussian mixture fit to spectral data. Both techniques can be misled by commonly occurring sensor defects. This is a particular problem with the new class of pushbroom hyperspectral sensors that use a two-dimensional focal plane. These defects are often caused by errors in the calibration process and bad detectors. They manifest themselves in the data as spectrally dependent shading and/or striping and are usually the limiting factor in sensor performance. The purpose of this paper is to investigate the effect of these sensor defects on these two classes of algorithms, using the N-FINDR and SEM algorithms as representatives. Results from actual data are presented.
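The abundance-estimation half of linear unmixing is easy to sketch; endmember finding (e.g., by N-FINDR) is assumed already done, and the nonnegative least-squares solver below stands in for whichever constrained inversion a fielded algorithm uses.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_cube(cube, endmembers):
    """Sketch: per-pixel nonnegative least-squares unmixing.
    cube:       (H, W, B) hyperspectral image.
    endmembers: (B, M) matrix of endmember spectra, one per column.
    Sum-to-one can be enforced by augmenting the system; omitted here."""
    H, W, B = cube.shape
    M = endmembers.shape[1]
    flat = cube.reshape(-1, B)
    abund = np.zeros((flat.shape[0], M))
    for i, pix in enumerate(flat):
        abund[i], _ = nnls(endmembers, pix)   # fractional abundances >= 0
    return abund.reshape(H, W, M)
```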
In recent years a number of techniques for automated classification of terrain from spectral data have been developed and applied to multispectral and hyperspectral data. Use of these techniques for hyperspectral data has presented a number of technical and practical challenges. Here we present a comparison of two fundamentally different approaches to spectral classification of data: (1) Stochastic Expectation Maximization (SEM), and (2) linear unmixing. The underlying background clutter models for each are discussed and parallels between them are explored. Parallels are drawn between estimated parameters or statistics obtained from each type of method. The mathematical parallels are then explored through application of these clutter models to airborne hyperspectral data from the NASA AVIRIS sensor. The results show surprising similarity between some of the estimates derived from these two clutter models, despite the major differences in the underlying assumptions of each.
The ability to detect weak targets of low contrast or signal-to-noise ratio (SNR) is improved by a fusion of data in space and wavelength from multispectral/hyperspectral sensors. It has been demonstrated previously that the correlation of the clutter between multiband thermal infrared images plays an important role in allowing the data collected in one spectral band to be used to cancel the background clutter in another spectral band, resulting in increased SNR. However, the correlation between bands is reduced when the spectrum observed in each pixel is derived from a mixture of several different materials, each with its own spectral characteristics. In order to handle the identification of objects in this complex (mixed) clutter, a class of algorithms has been developed that models the pixels as a linear combination of pure substances and then unmixes the spectra to identify the pixel constituents. In this paper a linear unmixing algorithm is incorporated with a statistical hypothesis test for detecting a known target spectral feature that obeys a linear mixing model in a mixture of background noise. The generalized linear feature detector utilizes a maximum likelihood ratio approach to detect and estimate the presence and concentration of one or more specific objects. A performance evaluation of the linear unmixing and maximum likelihood detector is shown by comparing the results to the spectral anomaly detection algorithm previously developed by Reed and Yu.
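For a single additive target signature in a Gaussian background, the maximum-likelihood machinery has a familiar closed form, sketched below; this is the textbook special case, not the paper's full generalized linear feature detector. The per-pixel abundance estimate and the normalized matched-filter statistic both follow from the estimated background mean and covariance.

```python
import numpy as np

def ml_feature_detector(cube, target):
    """Sketch: ML detection/estimation for the additive model
    x = mu + a*s + n, with n ~ N(0, Sigma) estimated from the scene."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)
    w = np.linalg.solve(cov, target.astype(np.float64))
    denom = float(target @ w)
    alpha = (Xc @ w) / denom            # ML estimate of target concentration
    score = (Xc @ w) / np.sqrt(denom)   # normalized matched-filter statistic
    return alpha.reshape(H, W), score.reshape(H, W)
```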
Multispectral and hyperspectral infrared (IR) sensors have been utilized in the detection of ground targets by exploiting differences in the statistical distribution of the spectral radiance between natural clutter and targets. Target classification by hyperspectral sensors such as the Spatially Modulated Imaging Fourier Transform Spectrometer (SMIFTS) sensor, a mid-wave infrared imager, depends on exploiting target phenomenology in the infrared. Determination of robust components from hyperspectral IR sensors that are useful for discriminating targets is a key issue in classification of ground targets. Both synthetic aperture radars (SAR) and IR imagers have been utilized in the target detection and recognition processes. Improved target classification by sensor fusion depends on exploitation of target phenomenology from both of these sensors. Here we show the results of an investigation of the use of hyperspectral infrared and low-frequency SAR signatures for the purpose of target recognition. Features extracted from both sensors on similar targets are examined in terms of their usefulness in discriminating between various classes of targets. Simple distance measures are computed to determine the potential for classifying targets based on a fusion of SAR and hyperspectral infrared data. These separability measures are applied to measurements on similar vehicle targets obtained from separate experiments involving the SMIFTS hyperspectral imager and the Stanford Research Institute SAR.
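One plausible form for such a separability measure is the Gaussian Bhattacharyya distance between two classes' fused feature vectors; the specific measures used in the study are not reproduced here, so treat this as an assumed stand-in.

```python
import numpy as np

def bhattacharyya_distance(feats_a, feats_b):
    """Sketch: Bhattacharyya distance between two target classes, each
    represented by an (N_i, D) array of fused SAR/hyperspectral features."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    cov_m = 0.5 * (cov_a + cov_b)
    dmu = mu_b - mu_a
    # Mean-separation term plus covariance-mismatch term.
    term1 = 0.125 * dmu @ np.linalg.solve(cov_m, dmu)
    _, logdet_m = np.linalg.slogdet(cov_m)
    _, logdet_a = np.linalg.slogdet(cov_a)
    _, logdet_b = np.linalg.slogdet(cov_b)
    term2 = 0.5 * (logdet_m - 0.5 * (logdet_a + logdet_b))
    return term1 + term2
```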