Diffusion learning is a generative technique commonly applied to create new images or audio directly from sampled noise. The approach works by repeatedly applying a degrading operation, such as additive noise, and training a neural network to learn the reverse (denoising) process. In place of noise, other degradations can be applied, such as the addition of atmospheric effects using a physics-based radiative transport code. In this paper, we explore coupling the MODTRAN software to a diffusion learning framework. The goal is to apply atmospheres systematically to a variety of reflective surfaces and use diffusion learning to train models for atmospheric correction. To achieve this, we generate a scoped dataset containing randomized Lambertian surfaces with differing solar illumination and surface angles.
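The forward half of this process can be sketched in a few lines. The example below assumes the standard Gaussian-noise degradation with a linear beta schedule; in the framework described above, a MODTRAN-computed atmospheric effect would take the place of the noise operator, and the trained network would learn its inverse (atmospheric correction). All names and schedule values here are illustrative.

```python
import numpy as np

# Minimal sketch of the forward (degradation) process in diffusion learning,
# assuming the usual Gaussian-noise degradation with a linear beta schedule.
rng = np.random.default_rng(0)

def forward_degrade(x0, t, betas):
    """Degrade a clean signal x0 to diffusion step t; return (x_t, noise)."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])     # cumulative signal retention
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise                             # the network learns to predict noise

betas = np.linspace(1e-4, 0.02, 1000)             # assumed schedule, 1000 steps
x0 = rng.standard_normal(64)                      # stand-in for a reflectance spectrum
x_t, eps = forward_degrade(x0, t=500, betas=betas)
```

At large t the signal fraction shrinks toward zero, which is what lets sampling start from pure noise and denoise step by step.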
KEYWORDS: Target detection, Shadows, Reflectivity, Simulations, Scene simulation, Light sources and illumination, Monte Carlo methods, Image segmentation, Atmospheric modeling, Signal detection
This paper examines target detection statistics when scene illumination changes from full solar illumination to scenes which are topographically shadowed as well as scenes under twilight conditions. The impact of scene shadowing is examined using forward simulations of hyperspectral scenes. The reference reflectance scene contains man-made elements as well as significant areas of vegetation, roads, and bare dirt. The digital elevation map of the area has been modified with the addition of a tall mountain ridge placed to the west of the reference scene. For the detection study a spectral signature from a blue roof retrieved from the scene was randomly embedded into the scene at subpixel levels. The scene radiance is then simulated for various solar zenith angles which produce fully shadowed, partly shadowed, and fully sunlit images. Target detection is performed on these simulated scenes after in-scene atmospheric correction. Standard detection methods yield degraded performance as portions of the scene become shadowed. Approaches that segregate the sunlit and shadowed areas are investigated and shown to improve detection results.
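The segregated approach can be illustrated with a toy matched-filter detector that computes background statistics separately for sunlit and shadowed pixel classes; the function names, regularization, and masks here are illustrative, not the paper's implementation.

```python
import numpy as np

def matched_filter(X, target):
    """Matched-filter scores for pixel rows X (pixels x bands) vs. a target spectrum."""
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
    w = np.linalg.solve(C, target - mu)
    w /= (target - mu) @ w                  # unit response at the target spectrum
    return (X - mu) @ w

def segmented_detection(cube, target, sunlit_mask):
    """Score sunlit and shadowed pixels with class-specific background statistics."""
    X = cube.reshape(-1, cube.shape[-1])
    scores = np.zeros(len(X))
    for m in (sunlit_mask.ravel(), ~sunlit_mask.ravel()):
        if m.sum() > cube.shape[-1]:        # need enough pixels for a covariance
            scores[m] = matched_filter(X[m], target)
    return scores.reshape(cube.shape[:2])
```

Running one filter over the mixed scene lets the shadowed pixels inflate the background covariance; splitting the classes keeps each filter matched to its own illumination regime.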
The THIA instrument is a visible through extended short-wave infrared (SWIR) imaging spectrometer. Designed around a solid-block optical system and a single camera, the sensor is extremely compact with low power requirements. The spectrometer, manufactured by Corning, consists of reflective optical and grating surfaces diamond-turned onto a single block of CaF2. The system has been flown repeatedly on a Matrice 600 hexacopter and on small aircraft for data collections. It operates from 0.4-2.45 microns with high throughput, due to the fast f/1.5 optics, and has a total weight of 2.4 kg. The THIA SNR was designed to exceed 100 over the full spectral range from 400 to 2450 nanometers under normal operating conditions and to exceed 250 below 1700 nanometers. The first prototype system exhibits degraded throughput below 500 nanometers but meets the SNR threshold over the rest of the range. Stray-light backgrounds in the initial prototype require software correction. Despite these issues, the system has been used to obtain meaningful data. Here we characterize the THIA signal-to-noise ratio under flight conditions and compare results to predicted and benchtop performance.
KEYWORDS: Target detection, Atmospheric modeling, Monte Carlo methods, Image segmentation, Signal to noise ratio, Scattering, Target acquisition, Signal detection, Sensors, Sensor performance
Target detection under poorly illuminated conditions or in shadows is a challenging problem due to low signal and strong atmospheric scattering relative to well-illuminated pixels. We will use the Monte Carlo Scene (MCScene) code, a high fidelity and radiometrically accurate ray tracing model, to simulate targets in shadow and explore the parameter space that affects target detectability, including variable illumination intensity and target spectral characteristics. An advantage of using MCScene for this type of simulation is that atmospheric effects, including the strong wavelength-dependent aerosol scattering that contributes to diffuse illumination in shadows, are properly modeled.
MODTRAN models the molecular absorption for the entire 0-50,000 cm-1 spectral range. Typically, radiative transfer models define distinct line-shape functions depending on the spectral region, which can produce spectral anomalies at the transitions. A 3-parameter Gross-Doppler line-shape function is defined that provides a spectrally universal model for computing molecular absorption.
We describe a new algorithm, QUAC-IR (QUick Atmospheric Correction in the InfraRed), for automated, fast atmospheric correction of LWIR (Long Wavelength InfraRed) hyperspectral imagery (HSI) and multi-spectral imagery (MSI) in the ~7-14 μm spectral region. QUAC-IR is an in-scene based algorithm, similar to the widely used ISAC (In-Scene Atmospheric Correction) algorithm. It improves upon the ISAC approach in several key ways, including providing absolute, versus relative, sensor-to-ground transmittances and radiances, as well as an estimate of the atmospheric downwelling sky radiance. The latter is important for retrieving emissivity from a reflective (i.e., non-blackbody) pixel. The key aspect of QUAC-IR is that it explicitly searches for blackbody pixels using an efficient approach involving a small number of spectral channels in which the atmospheric radiative transfer is dominated by the water continuum. This allows for fast and simplified Beer's Law (i.e., exponential) scaling of the path transmittance and radiance based on a compact library of pre-computed reference values. We apply QUAC-IR to well-calibrated data from the SEABASS and MAKO HSI sensors. The results are compared to those from a first-principles physics-based atmospheric code, FLAASH-IR.
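The Beer's-Law scaling step can be illustrated as follows: if a reference path transmittance spectrum has been pre-computed for one absorber column amount, the transmittance for any other column is simply an exponential power of it. The reference values and column amounts below are invented for illustration; they are not QUAC-IR's actual library.

```python
import numpy as np

# Beer's law: tau = exp(-k * u), so tau(u) = tau_ref ** (u / u_ref).
# This is the exponential scaling that lets a compact pre-computed library
# cover a continuum of water-vapor column amounts.
def scale_transmittance(tau_ref, column, column_ref):
    """Scale a path transmittance spectrum to a new absorber column amount."""
    return tau_ref ** (column / column_ref)

tau_ref = np.array([0.9, 0.7, 0.5, 0.3])   # made-up reference, column_ref = 1.0
tau_2x = scale_transmittance(tau_ref, column=2.0, column_ref=1.0)
```

Doubling the column squares the transmittance channel by channel, with no radiative transfer run required.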
KEYWORDS: Hyperspectral imaging, Detection and tracking algorithms, Algorithm development, Light sources and illumination, Data processing, Silver, Short wave infrared radiation, Gold, Sensors, Sun
Multispectral and hyperspectral imaging can facilitate vehicle tracking across a series of images by gathering spectral information that distinguishes the vehicle of interest from confusers. Developing effective algorithms for utilizing this information requires an understanding of the sources and nature of both the common and unique components in vehicle spectra, as well as the variations associated with lighting, view angle, and part of the vehicle being observed. In this study, focusing on the VNIR-SWIR spectral region, we analyze hyperspectral data from a recent field experiment at the Rochester Institute of Technology. We describe the spectra of painted vehicle surfaces in general terms, and demonstrate effective classification of automobiles based on spectra from upward facing surfaces (the roof, hood or trunk) using a method that combines the Support Vector Machine with data pre-conditioning.
Hyperspectral imaging (HSI) sensors have the ability to detect and identify objects within a scene based on the distinct attributes of their surface spectral signatures. Many targets of interest, such as vehicles, represent a complex arrangement of specular (non-Lambertian) materials with curved and flat surfaces oriented at varying view factors. This complexity, combined with possible changing atmospheric/illumination conditions and viewing geometries, can produce significant variations in the observed signatures from measurement to measurement, making detection and/or reacquisition challenging. This paper focuses on the characterization of visible-near infrared-short wave infrared (VNIR-SWIR) spectra for detection, identification and tracking of vehicles. Signature variations are predicted using a novel image simulation tool to calculate spectral images of complex 3D objects from a spectral material description such as the modified Beard-Maxwell BRDF model, a wireframe shape model, and a directional model of the illumination. We compare the simulations with recent VNIR-SWIR hyperspectral imagery of vehicles and panels collected at the Rochester Institute of Technology during an Autumn 2015 measurement campaign. Variations in both the simulated and measured spectra arise mainly from differences in the relative glint contribution. Implications of these variations on vehicle detection and identification are briefly discussed.
Surface solar radiation forecasting makes it possible to predict photovoltaic plant production for massive, safe integration of solar energy into the electric grid. For short-term (intra-day) forecasts, methods using images from geostationary meteorological satellites are more suitable than numerical weather prediction models. Forecast schemes consist of estimating cloud motion vectors and extrapolating cloud patterns from a given satellite image in order to predict the cloud cover state above a PV plant. Atmospheric motion vector retrieval techniques have been studied for several decades to improve weather forecasts. However, solar energy forecasting requires the extraction of cloud motion vectors at finer spatial and temporal resolution than is provided for weather forecast applications. Although motion vector retrieval is a broad research field in image processing, only block-matching techniques are used operationally for satellite-based solar energy forecasts. In this paper, we propose two motion vector extraction methods originating from video compression techniques (phase correlation and optical flow). We implemented them on a 6-day dataset of diurnal Meteosat-10 satellite images, extrapolated the cloud patterns, and compared predicted cloud maps against actual ones at time horizons from 15 minutes to 4 hours ahead. Forecast scores were compared to the state-of-the-art block-matching method. Phase correlation does not outperform block matching, but its computation time is about 25 times shorter. The optical flow method outperforms all the others with satisfactory computation time.
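A minimal version of the phase-correlation method evaluated here can be written with NumPy FFTs; in practice it would be applied tile-by-tile to successive satellite images to build a motion vector field, with sub-pixel refinement. This sketch handles a single patch and integer shifts only.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (dy, dx) translation with b ~ np.roll(a, (dy, dx), axis=(0, 1))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(Fa) * Fb
    R /= np.abs(R) + 1e-12                 # normalize: keep phase only
    corr = np.abs(np.fft.ifft2(R))         # delta-like peak at the shift
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    # map peaks past the midpoint to negative shifts (FFT wrap-around)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because the cross-power spectrum is normalized, the result is insensitive to overall brightness changes between frames, which matters for diurnal satellite imagery.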
There is a need for a Precision Radiometric Surface Temperature (PRST) measurement capability that can achieve noncontact profiling of a sample’s surface temperature when heated dynamically during laser processing, aerothermal heating, or metal cutting/machining. Target surface temperature maps within and near the heated spot provide critical quantitative diagnostic data for laser-target coupling effectiveness and laser damage assessment. In the case of metal cutting, this type of measurement provides information on plastic deformation in the primary shear zone where the cutting tool is in contact with the workpiece. The challenge in these cases is to measure the temperature of a target while its surface temperature and emissivity are changing rapidly and with incomplete knowledge of how the emissivity and surface texture (scattering) change with temperature. Bodkin Design and Engineering, LLC (BDandE), with partners Spectral Sciences, Inc. (SSI) and Space Computer Corporation (SCC), has developed a PRST Sensor based on a hyperspectral MWIR imager spanning the wavelength range 2-5 μm and providing a hyperspectral datacube of 20-24 wavelengths at a 60 Hz frame rate or faster. This imager is integrated with software and algorithms to extract surface temperature from radiometric measurements over the range from ambient to 2000 K with a precision of 20 K, even without a priori knowledge of the target’s emissivity and even as the target emissivity changes with time and temperature. In this paper, we present a description of the PRST system as well as laser heating test results that show the PRST system mapping target surface temperatures in the range 600-2600 K on a variety of materials.
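A toy analogue can illustrate why a hyperspectral MWIR measurement tolerates unknown emissivity: under a grey-body assumption, the ratio of measured radiance to the Planck function is wavelength-flat only at the true temperature, so temperature can be found by minimizing that ratio's spread. The PRST algorithms themselves are not spelled out in the abstract; the grid search below is purely illustrative.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23    # Planck, light speed, Boltzmann (SI)

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def fit_temperature(lam, radiance, T_grid):
    """Pick the grid temperature making radiance / planck flattest (grey-body)."""
    def spread(T):
        ratio = radiance / planck(lam, T)
        return np.std(ratio) / np.mean(ratio)   # relative spread of apparent emissivity
    return T_grid[int(np.argmin([spread(T) for T in T_grid]))]

lam = np.linspace(2e-6, 5e-6, 20)              # a 20-channel MWIR cube, as in PRST
truth = 0.4 * planck(lam, 1200.0)              # grey emissivity 0.4 at T = 1200 K
T_est = fit_temperature(lam, truth, np.arange(600.0, 2001.0, 1.0))
```

The emissivity level (0.4 here) drops out of the fit; only departures from grey behavior across the band would bias the retrieval, which is where having 20-24 wavelengths helps.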
This paper discusses recent advances in the simulation of spectral scenes with partial cloud cover. We examine the effect
of broken cloud fields on the solar illumination reaching the ground. Application of aerosol retrieval techniques in the
vicinity of broken clouds leads to significant over-prediction of aerosol optical depth because of the enhancement of
visible illumination due to scattering of photons from clouds into clear patches. These illumination enhancement effects
are simulated for a variety of broken cloud fields using the MCScene code, a high fidelity model for full optical
spectrum (UV through LWIR) spectral image simulation. MCScene provides an accurate, robust, and efficient means to
generate spectral scenes for algorithm validation. MCScene utilizes a Direct Simulation Monte Carlo approach for
modeling 3D atmospheric radiative transfer (RT), including full treatment of molecular absorption and Rayleigh
scattering, aerosol absorption and scattering, and multiple scattering and adjacency effects, as well as scattering from
spatially inhomogeneous surfaces.
The MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) spectral image simulation, will be
discussed and its features illustrated with sample calculations. The MCScene simulation is based on a Direct Simulation
Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces
including surface BRDF effects. The model includes treatment of land and ocean surfaces, 3D terrain, 3D surface
objects, and effects of finite clouds with surface shadowing. This paper will review the more recent upgrades to the
model including the development of an approach for incorporating direct and scattered thermal emission predictions into
the MCScene simulations. Sample calculations presented in the paper include a full optical spectrum simulation from the
visible to the LWIR for a desert scene under a broken cloud field. This scene was derived from an AVIRIS visible to
SWIR spectral imaging data collect over the Virgin Mountains in Nevada. The data has been extrapolated to the thermal
IR. Other calculations include complex 3D clouds over urban and rural terrain.
KEYWORDS: Clouds, Monte Carlo methods, Atmospheric modeling, Photons, 3D modeling, Scattering, Aerosols, Reflectivity, Scene simulation, Atmospheric particles
This paper discusses the effects of broken cloud fields on solar illumination reaching the ground. Application of aerosol
retrieval techniques in the vicinity of broken clouds leads to significant over-prediction of aerosol optical depth because
of the enhancement of visible illumination from the scattering of photons from clouds into clear patches. These
illumination enhancement effects are simulated for a variety of broken cloud fields using the MCScene code, a high
fidelity model for full optical spectrum (UV through LWIR) spectral image simulation. MCScene provides an accurate,
robust, and efficient means to generate spectral scenes for algorithm validation. MCScene utilizes a Direct Simulation
Monte Carlo approach for modeling 3D atmospheric radiative transfer (RT), including full treatment of molecular
absorption and Rayleigh scattering, aerosol absorption and scattering, and multiple scattering and adjacency effects, as
well as scattering from spatially inhomogeneous surfaces. The model includes treatment of land and ocean surfaces, 3D
terrain, 3D surface objects, and effects of finite clouds with surface shadowing. The paper includes an overview of the
MCScene code and a series of calculations for broken 3D cloud fields demonstrating the effects of clouds on downwelling
flux.
The QUAC (Quick Atmospheric Correction) algorithm for in-scene-based atmospheric correction of VIS-SWIR
(VISible-Short Wave InfraRed) Multi- and Hyperspectral Imagery (MSI and HSI) is reviewed and applied to
radiometrically uncalibrated data. Quite good agreement was previously demonstrated for the retrieved pixel spectral
reflectances between QUAC and the physics-based atmospheric correction code FLAASH (Fast Line-of-sight
Atmospheric Analysis of Spectral Hypercubes) for a variety of HSI and MSI data cubes. In these code-to-code
comparisons, all the data cubes were obtained with well-calibrated sensors. However, many sensors operate in an
uncalibrated manner, precluding the use of physics-based codes to retrieve surface reflectance. The ability to retrieve
absolute spectral reflectances from such sensors would significantly increase the utility of their data. We apply QUAC
to calibrated and uncalibrated versions of the same Landsat MSI data cube, and demonstrate nearly identical retrieved
spectral reflectances for the two data sets.
This paper will discuss recent improvements made to the MCScene code, a high fidelity model for full optical spectrum
(UV through LWIR) hyperspectral image (HSI) simulation. MCScene provides an accurate, robust, and efficient means
to generate HSI scenes for algorithm validation. MCScene utilizes a Direct Simulation Monte Carlo approach for
modeling 3D atmospheric radiative transfer (RT) including full treatment of molecular absorption and Rayleigh
scattering, aerosol absorption and scattering, and multiple scattering and adjacency effects, as well as scattering from
spatially inhomogeneous surfaces, including surface BRDF effects. The model includes treatment of land and ocean
surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will provide an
overview of how RT elements are incorporated into the Monte Carlo engine. Several new examples of the capabilities
of MCScene to simulate 3-dimensional cloud fields will also be discussed, and sample calculations will be presented.
Subspace methods for hyperspectral imagery enable detection and identification of targets under unknown
environmental conditions (i.e., atmospheric, illumination, surface temperature, etc.) by specifying a subspace of possible
target spectral signatures (and, optionally, a background subspace) and identifying closely fitting spectra in the image.
The subspaces, defined from a set of exemplar spectra, are compactly expanded in singular value decomposition basis
vectors or, less commonly, endmember basis spectra, linear combinations of which are used to fit the image data. In the
present study we compared detection performance in the thermal infrared using several different constrained and
unconstrained basis set expansions of low-dimensional subspaces, including a method based on the Sequential
Maximum Angle Convex Cone (SMACC) endmember algorithm. Constrained expansions were found to provide a
modest improvement in algorithm robustness in our test cases.
Compared to nadir viewing, off-nadir viewing of the ground from a high-altitude platform provides opportunities to increase area coverage and to reduce revisit times, although at the expense of spatial resolution. In this study, the ability to atmospherically compensate off-nadir hyperspectral imagery taken from a space platform was evaluated for a worst-case viewing geometry, using EO-1 Hyperion data collected with an off-nadir angle of 63° at the sensor, corresponding to six air masses along the line of sight. Reasonable reflectance spectra were obtained using both
first-principles (FLAASH) and empirical (QUAC)
atmospheric-compensation methods. Some refinements to FLAASH that enable visibility retrievals with highly off-nadir imagery, and also improve accuracy in nadir viewing, were developed and are described.
We describe improvements to a recently developed VNIR-SWIR atmospheric correction method for hyper- and multispectral imagery, dubbed QUAC (QUick Atmospheric Correction). It determines the atmospheric compensation parameters directly from the information contained within the scene using the observed pixel spectra. The newest implementation of QUAC is based on the assumption that the average reflectance of a collection of diverse material spectra, such as the endmember spectra in a scene, is effectively scene independent. This enables the retrieval of reasonably accurate reflectance spectra even when the sensor does not have a proper radiometric or wavelength calibration, or when the solar illumination intensity is unknown. The computational speed of the atmospheric correction method is significantly faster than for the first-principles methods, making it potentially suitable for real-time applications on aircraft and spacecraft. QUAC is applied to a diverse collection of hyper- and multispectral data sets and the results are compared to those obtained with the physics-based atmospheric correction code FLAASH (Fast Line of sight Atmospheric Analysis of Spectral Hypercubes).
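The scene-independence assumption can be turned into a one-line gain estimate: divide a universal mean endmember reflectance by the observed mean endmember radiance, channel by channel. The flat reference curve below is an invented stand-in for QUAC's internal library, and the endmember selection step is skipped.

```python
import numpy as np

# Toy version of the QUAC assumption: the mean reflectance of a diverse
# endmember set is roughly scene-independent, so a per-channel gain from
# (possibly uncalibrated) radiance to reflectance falls out directly.
def quac_gain(endmember_radiance, universal_mean_reflectance):
    """Per-channel gain mapping sensor radiance (or raw DN) to reflectance."""
    observed_mean = endmember_radiance.mean(axis=0)      # mean over endmembers
    return universal_mean_reflectance / observed_mean

rng = np.random.default_rng(3)
universal = np.full(50, 0.25)                            # assumed flat reference
endmembers = rng.random((12, 50)) * 100.0                # uncalibrated units
gain = quac_gain(endmembers, universal)
reflectance = endmembers * gain                          # corrected spectra
```

Because the gain is a ratio, any unknown multiplicative factor (radiometric miscalibration, solar intensity) cancels, which is the property the abstract highlights.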
KEYWORDS: Reflectivity, Data modeling, Atmospheric modeling, Monte Carlo methods, Correlation function, Scene simulation, 3D image processing, Sensors, Hyperspectral simulation, Atmospheric particles
A method for extracting statistics from hyperspectral data and generating synthetic scenes suitable for scene generation models is presented. Regions composed of a general surface type with a small intrinsic variation, such as a forest or crop field, are selected. The spectra are decomposed using a basis set derived from spectra present in the scene, and the abundances of the basis members in each pixel spectrum are found. Statistics such as the abundance means, covariances, and channel variances are extracted. The scenes are synthesized using a coloring transform with the abundance covariance matrix. The pixel-to-pixel spatial correlations are modeled by an autoregressive moving average texture generation technique. Synthetic reflectance cubes are constructed using the generated abundance maps, the basis set, and the channel variances. Enhancements include removing any pattern from the scene and reducing the skewness. This technique is designed to work on atmospherically-compensated data in any spectral region, including the visible-shortwave infrared HYDICE and AVIRIS data presented here. Methods to evaluate the performance of this approach for generating scene textures include comparing the statistics of the synthetic surfaces and the original data, using a signal-to-clutter ratio metric, and inserting sub-pixel spectral signatures into scenes for detection using spectral matched filters.
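The coloring-transform step can be sketched as follows: white Gaussian abundance vectors are multiplied by the Cholesky factor of the extracted covariance so the synthetic abundances reproduce the measured means and covariances. The ARMA spatial-correlation stage is omitted here, and the example statistics are invented.

```python
import numpy as np

def color_abundances(mean, cov, n_pixels, rng):
    """Draw n_pixels abundance vectors with the given mean and covariance."""
    L = np.linalg.cholesky(cov)                  # the coloring matrix
    white = rng.standard_normal((n_pixels, len(mean)))
    return white @ L.T + mean                    # colored: covariance = L L^T = cov

rng = np.random.default_rng(4)
mean = np.array([0.5, 0.3, 0.2])                 # illustrative abundance means
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.01]])             # illustrative abundance covariance
ab = color_abundances(mean, cov, 20000, rng)
```

The synthetic reflectance cube then follows by mixing these abundance maps with the basis spectra.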
The QUick Image Display (QUID) model accurately computes and displays radiance images of aircraft and other
objects, generically called targets, at animation rates while the target undergoes unrestricted flight. Animation rates are
obtained without sacrificing radiometric accuracy by using two important innovations. First, QUID has been
implemented using the Open Scene Graph (OSG) library, an open-source, cross-platform 3-D graphics toolkit for the
development of high performance graphics applications in the fields of visual simulation, virtual reality, scientific
visualization and modeling. Written entirely in standard C++ and fully encapsulating OpenGL and its extensions, OSG
exploits modern graphics hardware to perform the computationally intensive calculations such as hidden surface
removal, 3-D transformations, and shadow casting. Second, a novel formulation for reflective/emissive terms enables
rapid and accurate calculation of per-vertex radiance. The bi-directional reflectance distribution function (BRDF) is
decomposed into separable spectral and angular functions. The spectral terms can be pre-calculated for a user-specified
band pass and for a set of target-observer ranges. The only BRDF calculations that must be performed during target
motion involve the observer-target-source angular functions. QUID supports a variety of target geometry files and is
capable of rendering scenes containing high level-of-detail targets with thousands of facets. QUID generates accurate
visible to LWIR radiance maps, in-band and spectral signatures. The newest features of QUID are illustrated with
radiance and apparent temperature images of threat missiles as viewed by an aircraft missile warning system.
KEYWORDS: Monte Carlo methods, 3D modeling, Hyperspectral simulation, Atmospheric modeling, Clouds, Reflectivity, Sensors, Photons, RGB color model, Computer simulations
This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high
fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation
is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as
spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and ocean
surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will review an
acceleration algorithm that exploits spectral redundancies in hyperspectral images. In this algorithm, the full scene is
determined for a subset of spectral channels, and then this multispectral scene is unmixed into spectral end members and
end member abundance maps. Next, pure end member pixels are determined at their full hyperspectral resolution, and
the full hyperspectral scene is reconstructed from the hyperspectral end member spectra and the multispectral abundance
maps. This algorithm effectively performs a hyperspectral simulation while requiring only the computational time of a
multispectral simulation. The acceleration algorithm will be demonstrated, and errors associated with the algorithm will
be analyzed.
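A numerical sketch of this acceleration scheme follows, with a synthetic linear-mixing "scene" so the reconstruction can be checked exactly; real scenes unmix only approximately, which is the source of the errors the paper analyzes. The unconstrained least-squares unmixing below stands in for whatever unmixing algorithm the full code uses.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_hyper, n_multi, n_end = 200, 120, 10, 4

E_hyper = rng.random((n_end, n_hyper))          # full-resolution endmember spectra
A = rng.dirichlet(np.ones(n_end), size=n_pix)   # abundance maps (rows sum to 1)
cube_hyper = A @ E_hyper                        # the expensive full simulation

# 1) simulate only a channel subset (the cheap multispectral run)
bands = np.linspace(0, n_hyper - 1, n_multi).astype(int)
cube_multi = cube_hyper[:, bands]
E_multi = E_hyper[:, bands]

# 2) unmix the multispectral cube (real codes constrain abundances >= 0)
A_est = np.linalg.lstsq(E_multi.T, cube_multi.T, rcond=None)[0].T

# 3) reconstruct: hyperspectral endmembers x multispectral abundances
cube_rebuilt = A_est @ E_hyper
```

The hyperspectral cube is recovered at the cost of a multispectral simulation plus one unmixing step, which is exactly the speedup claimed above.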
We describe a new visible-near infrared short-wavelength infrared (VNIR-SWIR) atmospheric correction method for multi- and hyperspectral imagery, dubbed QUAC (QUick Atmospheric Correction), that also enables retrieval of the wavelength-dependent optical depth of the aerosol or haze and molecular absorbers. It determines the atmospheric compensation parameters directly from the information contained within the scene using the observed pixel spectra. The approach is based on the empirical finding that the spectral standard deviation of a collection of diverse material spectra, such as the endmember spectra in a scene, is essentially spectrally flat. It allows the retrieval of reasonably accurate reflectance spectra even when the sensor does not have a proper radiometric or wavelength calibration, or when the solar illumination intensity is unknown. The computational speed of the atmospheric correction method is significantly faster than for the first-principles methods, making it potentially suitable for real-time applications. The aerosol optical depth retrieval method, unlike most prior methods, does not require the presence of dark pixels. QUAC is applied to atmospherically correct several AVIRIS data sets and a Landsat-7 data set, as well as simulated HyMap data for a wide variety of atmospheric conditions. Comparisons to the physics-based Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) code are also presented.
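The flat-standard-deviation finding can be turned into a toy per-channel gain retrieval: any wavelength dependence in the standard deviation of measured endmember radiances must come from the atmosphere-plus-sensor factor, which can therefore be divided out. The flat level and the simulated atmosphere below are invented for illustration.

```python
import numpy as np

def quac_gain_from_std(endmember_radiance, flat_level):
    """Gain per channel from the std of endmember radiances (flat-std assumption)."""
    return flat_level / endmember_radiance.std(axis=0)

rng = np.random.default_rng(6)
true_reflectance = rng.random((15, 40))               # 15 diverse materials, 40 channels
atm_gain = 0.5 + 0.4 * np.sin(np.linspace(0, 3, 40))  # unknown atmosphere x sensor gain
radiance = true_reflectance * atm_gain                # what the sensor observes

flat = true_reflectance.std(axis=0).mean()            # the assumed flat std level
gain = quac_gain_from_std(radiance, flat_level=flat)
retrieved = radiance * gain                           # corrected spectra
```

Because the estimate uses only a ratio of in-scene statistics, it is indifferent to radiometric calibration, matching the claim in the abstract.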
KEYWORDS: Atmospheric modeling, Sensors, Reflectivity, Monte Carlo methods, Atmospheric sensing, Thermography, Algorithm development, 3D modeling, Infrared radiation, Long wavelength infrared
This paper demonstrates the use of a high fidelity hyperspectral scene simulation tool, called MCScene, to generate realistic thermal infrared scenes that can be used for algorithm development efforts, such as gas plume detection algorithms. MCScene is based on a Direct Simulation Monte Carlo (DSMC) approach for modeling 3D atmospheric
radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. Synthetic “ground-truth” is specified as surface and atmospheric property inputs, and it is practical to consider wide variations of these properties. The model includes treatment of land and ocean surfaces, 3D terrain and bathymetry, 3D surface objects, and effects of finite clouds with surface shadowing. The computed hyperspectral data cubes can supplement field validation data for algorithm development. Sample calculations presented in this paper include a thermal infrared simulation for a
desert scene that includes a gas plume produced by an industrial complex. This scene was derived from an AVIRIS visible to SWIR HSI data collect over the Virgin Mountains in Nevada. The data has been extrapolated to the thermal IR and a representative industrial site and plume have been added to the scene.
KEYWORDS: Monte Carlo methods, Atmospheric modeling, 3D modeling, Clouds, Reflectivity, Photons, Hyperspectral simulation, Scattering, Long wavelength infrared, Data modeling
The MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation, will be discussed and its features illustrated with sample calculations. MCScene is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and water surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will review the more recent upgrades to the model, including the development of an approach for incorporating direct and scattered thermal emission predictions into the MCScene simulations. Calculations presented in the paper include a full optical spectrum simulation from the visible to the LWIR for a desert scene. This scene was derived from an AVIRIS visible to SWIR HSI data collect over the Virgin Mountains in Nevada, extrapolated to the thermal IR. Other calculations include complex 3D clouds over urban and rural terrain.
KEYWORDS: Monte Carlo methods, Reflectivity, 3D modeling, Atmospheric modeling, Photons, Clouds, Sensors, Scene simulation, Hyperspectral simulation, RGB color model
Spectral Sciences, Inc., in collaboration with NASA and AFRL, is developing a high fidelity model for hyperspectral image (HSI) simulation. The simulation is based on a Direct Simulation Monte Carlo (DSMC) approach for modeling topographic effects. Synthetic “ground-truth” is specified as surface and atmospheric property inputs, and it is practical to consider wide variations of these properties. The model includes treatment of land and ocean surfaces, 3D terrain and bathymetry, 3D surface objects, and effects of finite clouds with surface shadowing. The computed HSI data cubes can serve as both a surrogate for and a supplement to field validation data for algorithm development efforts or for sensor design trade-studies. The initial version of the software package developed in collaboration with NASA treated the reflective spectral domain from the visible to the SWIR. In this paper, we review the reflective spectral domain model and present our approach for extending the HSI scene simulation package into the thermal infrared. The model is demonstrated with a variety of visible and LWIR scene simulations.
Hyperspectral systems are increasingly being mated with on-board target detection algorithms. However, the only way to test these algorithms has been field testing, which is expensive and inherently unrepeatable. This paper will describe a Hyperspectral Scene Generator that can display hundreds of programmable high resolution spectra simultaneously. This allows a target to be inserted into a previously measured field for testing of a hyperspectral sensor and target detection algorithms in the lab. The design of the Hyperspectral Scene Generator presently applies to the Visible and Near InfraRed (VNIR) and Short Wave InfraRed (SWIR) but may also be applied to the MidWave InfraRed (MWIR) and Long Wave InfraRed (LWIR) spectral regions. Funding for this study is provided by the Office of the Secretary of Defense and the Director, Operational Test and Evaluation (DOT&E) to investigate the development of a hyperspectral scene generator that will have broad application to many hyperspectral systems.
A new end-member analysis method based on convex cones has been developed. The method finds extreme points in a convex set. Unlike convex methods that rely on a simplex, the number of end-members is not restricted by the number of spectral channels. The algorithm simultaneously finds fractional abundance maps. The fractional abundances are the fractions of the total spectrally integrated radiance of a pixel that are contributed by the end-members. A physical model of the hyper-spectral or multi-spectral scene is obtained by combining subsets of the end-members into bundles of spectra for each scene material. The bundle spectra represent the spectral variability of the material in the scene induced by illumination, shadowing, weathering and other environmental effects. The method offers advantages in multi-spectral data sets where the limited number of channels impairs material un-mixing by standard techniques. The method can also be applied to compress hyper-spectral data. The fractional abundance matrices are sparse and offer an additional compression capability over standard matrix factorization techniques. A description of the method and applications to real and synthetic hyper-spectral and multi-spectral data sets will be presented.
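A simplified sequential selection in the spirit of the convex-cone method (without the abundance constraints of the full algorithm) picks the largest-norm spectrum, deflates all spectra along that direction, and repeats; the extreme points of the data's cone are selected first. This is an illustrative reduction, not the published algorithm.

```python
import numpy as np

def sequential_endmembers(X, n_end):
    """Return row indices of n_end extreme spectra of X (pixels x channels)."""
    R = X.astype(float).copy()
    picks = []
    for _ in range(n_end):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))   # most extreme residual
        picks.append(i)
        v = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ v, v)      # project out the new endmember direction
        # rows driven to zero are already spanned by the chosen endmembers
    return picks
```

Because selection works on residuals rather than a simplex, the number of endmembers is not tied to the number of spectral channels, mirroring the property claimed above.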
KEYWORDS: Atmospheric modeling, Reflectivity, Target detection, Monte Carlo methods, Data modeling, Scene simulation, Atmospheric particles, Detection and tracking algorithms, Algorithm development, Visible radiation
A method for the extraction of spectral and spatial scene statistics from hyperspectral data is discussed. The method is designed to work on atmospherically compensated data in any spectral region, although this paper will report on visible scene statistics derived from atmospherically compensated AVIRIS data. Our approach is based on a physical description where the scene is composed of materials that in turn are described by a set of spectral endmembers. The spatial statistics of individual scene materials have more stationary behavior than the statistics for the whole scene. For this reason we have formulated our approach around statistics that are determined from the fractional abundance images obtained from the spectral un-mixing of the scene. These quantities are used to construct a high spatial resolution reflectance or emissivity/temperature surface using a fast autoregressive texture generation tool. The spectral complexity of the synthetic surfaces has been evaluated by inserting objects for detection and calculating ROC curves. Preliminary results indicate that synthetic scenes with realistic levels of spectral clutter can be generated using spectral and spatial statistics determined from endmember fractional abundance maps. This work is motivated by the need for realistic hyperspectral scene generation capabilities to test future hyperspectral sensor concepts.
KEYWORDS: Atmospheric modeling, Atmospheric particles, Reflectivity, Sensors, Monte Carlo methods, Data modeling, Scene simulation, Correlation function, Image processing, Software
A method for the extraction of spectral and spatial scene statistics from hyperspectral data is discussed. The method is designed to work on atmospherically compensated data in the visible/SWIR or the Thermal IR (TIR). The statistics are determined from the fractional abundance images obtained from spectral un-mixing of the scene. The statistical quantities that are extracted include endmember abundance means, variances, and correlation lengths. These quantities are used to construct a high spatial resolution reflectance or emissivity/temperature surface using a fast autoregressive texture generation tool. The spectral complexity of the synthetic surfaces has been evaluated by inserting objects for detection and calculating ROC curves. Preliminary results indicate that synthetic scenes with realistic levels of spectral clutter can be generated using spectral and spatial statistics determined from endmember fractional abundance maps. This work is motivated by the need for realistic hyperspectral scene generation capabilities to test future hyperspectral sensor concepts.
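The autoregressive texture step can be illustrated with a minimal separable first-order AR synthesis. This is a sketch under assumed conventions, not the paper's actual tool: the function name, the `rho = exp(-1/corr_len)` parameterization, and the unit-variance innovation scaling are all illustrative choices.

```python
import numpy as np

def ar1_texture(shape, mean, var, corr_len, seed=0):
    """Separable first-order autoregressive texture (illustrative sketch).
    corr_len is an assumed e-folding correlation length in pixels."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-1.0 / corr_len)       # lag-1 pixel correlation
    t = rng.standard_normal(shape)
    # filter down the columns, then (via the transposed view) along the rows;
    # the sqrt(1 - rho^2) innovation scaling keeps the variance at unity
    for img in (t, t.T):
        for i in range(1, img.shape[0]):
            img[i] = rho * img[i - 1] + np.sqrt(1.0 - rho**2) * img[i]
    return mean + np.sqrt(var) * t
```

Each endmember abundance image would be synthesized this way from its extracted mean, variance, and correlation length, then combined with the endmember spectra to build the reflectance or emissivity/temperature surface.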
KEYWORDS: Sensors, Signal processing, Imaging spectroscopy, Reflectivity, Image sensors, Remote sensing, Signal to noise ratio, Data compression, Remote sensing system design, Optical resolution
A method of optimizing the selection of spectral channels in a spectral-spatial remote sensor has been developed that is applicable to the design of multispectral, hyperspectral, and ultraspectral resolution sensors. The approach is based on an end-member analysis technique that has been refined to select the most information-dense channels. The algorithm operates sequentially: at any step in the sequence, the channel selected is the most independent from all previously selected channels. After the channel selection process, highly correlated channels that are contiguous to those selected can be merged to form bands. This process increases the signal-to-noise ratio of the new broader spectral bands. The resulting bands, potentially of unequal width and spacing, collect the most uncorrelated spectral information present in the data. The band selection provides a physical interpretation of the data and has applications in spectral feature selection and data compression.
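The sequential most-independent-channel criterion can be sketched as Gram-Schmidt residual maximization. The function name and the specific residual-norm selection rule below are assumptions for illustration, not the paper's refined technique.

```python
import numpy as np

def select_channels(R, k):
    """Greedily pick the channel least explained by those already chosen.
    R has shape (n_pixels, n_channels); returns k channel indices."""
    resid = R - R.mean(axis=0)              # remove channel means
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(resid, axis=0)))
        chosen.append(j)
        q = resid[:, j] / np.linalg.norm(resid[:, j])
        resid = resid - np.outer(q, q @ resid)   # project out the chosen channel
    return chosen
```

Contiguous channels whose residuals collapse to near zero after a selection are the highly correlated ones that would be merged into broader bands in the second stage.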
The Quick Image Display (QUID) model accurately computes and displays radiance images at animation rates while the target undergoes unrestricted flight or motion. QUID uses a novel formulation for reflective/emissive terms which enables rapid and accurate target image rendering. The fundamental quantity entering the determination of reflected radiation is the bi-directional reflectance distribution function (BRDF). QUID's BRDF formulation involves decomposition of the BRDF into a generalized sum of product terms. Each product term is factored into separable spectral and angular functions. The spectral terms can be pre-calculated for the user-specified bandpass and for a set of target-observer ranges. The only BRDF calculations that must be performed during the simulation involve the observer-target-source angular functions, which change with target orientation. Reflected solar radiation can dominate the apparent target signature of aircraft in daytime scenes in the MWIR/SWIR spectral regions. To accurately simulate reflections from curved surfaces, either a large number of small flat facets or some type of pseudo-curvature technique must be used. Both approaches tend to significantly slow down scene rendering. The main thrust of this effort is to find rapid techniques for accurate specular glint rendering on curved surfaces.
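The factorization can be made concrete with a small sketch: a BRDF written as a sum of separable spectral and angular products, with the spectral factors pre-integrated over the bandpass once before the animation loop. The function names and the trapezoid pre-integration are illustrative assumptions, not QUID's actual implementation.

```python
def brdf(wl, theta_i, theta_r, spectral_terms, angular_terms):
    """Separable-sum sketch: f = sum_k S_k(wl) * A_k(theta_i, theta_r)."""
    return sum(S(wl) * A(theta_i, theta_r)
               for S, A in zip(spectral_terms, angular_terms))

def in_band_terms(spectral_terms, wl_grid):
    """Pre-integrate each spectral factor over a bandpass grid (trapezoid).
    Done once per bandpass, so only the cheap angular factors need to be
    evaluated each frame as the target orientation changes."""
    out = []
    for S in spectral_terms:
        ys = [S(w) for w in wl_grid]
        out.append(sum(0.5 * (ys[i] + ys[i + 1]) * (wl_grid[i + 1] - wl_grid[i])
                       for i in range(len(wl_grid) - 1)))
    return out
```

This split is what makes animation rates possible: the expensive spectral work moves out of the per-frame loop, leaving only angular evaluations that depend on the observer-target-source geometry.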
A technique has been developed to estimate bounds on the spectra of major constituents of multispectral images. The bounds are two distinct sets of spectra: one in which the spectra are maximally independent from one another and another in which the spectra are minimally independent. Both sets and their corresponding estimated abundance maps satisfy feasibility constraints for both spectral elements and fractional abundances. The actual spectra will have an independence measure between the minimal and maximal sets. An approach to mapping the feasibility region for all intermediate independence measures is described. In general, for a given level of independence there is an infinity of rotation axes about which small rotations of the spectra lead to another feasible set. In our approach, the selected rotation axis is the one which takes the maximally independent basis into the minimally independent basis. The effects of noise and low levels of additional components are expected to have a larger effect on altering the spectra than the modifications due to small arbitrary rotations of feasible spectra. The technique is illustrated by application to a computer-generated multispectral data array.
The Quick Image Display (QUID) model accurately computes and displays radiance images at animation rates while the target undergoes unrestricted flight or motion. Animation rates are obtained without sacrificing radiometric accuracy by using three important innovations. First, QUID has been implemented using the OpenGL graphics library which utilizes available graphics hardware to perform the computationally intensive hidden surface calculations. Second, the thermal emission and reflectance calculations are factorized into angular and wavelength dependent terms for computational efficiency. Third, QUID uses OpenGL supported texture mapping to simulate pseudo curved surface reflectance. The size of the glint texture is controlled by paint/surface properties and the surface normals at the facet's vertices. QUID generates IR radiance maps, in-band and spectral signatures for high level of detail targets with thousands of facets. Model features are illustrated with radiance and radiance contrast images of aircraft.
Atmospheric infrared radiance fluctuations result from fluctuations in the density of atmospheric species, individual molecular state populations, and kinetic temperatures and pressures along the sensor line of sight (LOS). The SHARC-4 program models the atmospheric background radiance fluctuations. It predicts a two-dimensional radiance spatial covariance function from the underlying 3D atmospheric structures. The radiance statistics are non-stationary and are dependent on bandpass, sensor location, and field of view (FOV). In the upper atmosphere non-equilibrium effects are important. Fluctuations in kinetic temperature can result in correlated or anti-correlated fluctuations in vibrational state temperatures. The model accounts for these effects and predicts spatial covariance functions for molecular state number densities and vibrational temperatures. SHARC predicts the non-equilibrium dependence of molecular state number density fluctuations on kinetic temperature and density fluctuations, and calculates mean LOS radiances and radiance derivatives. The modeling capabilities are illustrated with sample predictions of MSX-like experiments with MSX sensor bandpasses, sensor locations, and FOV. The model can be applied for all altitudes and arbitrary sensor FOV including nadir and limb viewing.
This paper describes the development of a new version of the SHARC code, SHARC-3, which includes the ability to simulate changing atmospheric conditions along the line-of-sight (LOS) paths being calculated. SHARC has been developed by the U.S. Air Force for the rapid and accurate calculation of upper atmospheric IR radiance and transmittance spectra with a resolution of better than 1 cm⁻¹ in the 2 to 40 micrometer (250 to 5,000 cm⁻¹) wavelength region for arbitrary LOSs in the 50 - 300 km altitude regime. SHARC accounts for the production, loss, and energy transfer processes among the molecular vibrational states important to this spectral region. Auroral production and excitation of CO2, NO, and NO+ are included in addition to quiescent atmospheric processes. Calculated vibrational temperatures are found to be similar to results from other non-LTE codes, and SHARC's equivalent-width spectral algorithm provides very good agreement with much more time-consuming 'exact' line-by-line methods. Calculations and data comparisons illustrating the features of SHARC-3 are presented.
Simulation of infrared radiance fluctuations in the atmosphere depends on detailed descriptions of fluctuations in atmospheric species number densities, vibrational state populations, and the kinetic temperatures along the sensor line-of-sight. The relationship between kinetic and vibrational temperature fluctuations depends on the subtle interplay between changes in the total number densities, changes in the temperature-dependent kinetic rates, and the relative contribution of the radiative relaxation. The model developed in this paper predicts the two-dimensional radiance covariance function under nonequilibrium conditions. The radiance statistics are non-stationary and are explicitly bandpass and sensor FOV dependent. The SHARC model is used to calculate mean LOS radiance values and radiance derivatives, which are necessary to determine the radiance statistics. Inputs to the model include the statistical parameters of a non-stationary atmospheric temperature fluctuation model and an atmospheric profile. The radiance statistics are used in a simple model for synthesizing images. The model has been applied to calculate the radiance structure for the OH(Δv = 1) SWIR band and the CO2(ν3) MWIR band under nighttime conditions.