This paper is concerned with suppressing multiple wide-bandwidth radio frequency interference (RFI) sources in SAR systems. A coherent processing scheme for passive radar (sniff) data is presented to mitigate the effects of both wide-bandwidth and narrow-bandwidth RFI sources in the active radar data collected by a SAR system. The approach is based on a two-dimensional adaptive filtering of the active SAR data using the passive sniff data as the reference signal. A similar mathematical (signal) model and processing is also utilized to suppress self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. Results are shown using the Army Research Laboratory (ARL) low-frequency, ultra-wideband (UWB) imaging radar (Boom-SAR).
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The antenna area of a distributed microsatellite radar system is usually smaller than the minimum SAR antenna area constraint, giving rise to range-Doppler ambiguities. A key focus of this signal processing research is therefore to develop processing approaches that exploit the added degrees of freedom of a spatially diverse formation to resolve the inherent ambiguities of such a sparse-aperture distributed microsatellite radar system. In this paper, a beamforming method for three-dimensional sparse arrays is proposed to spatially null Doppler ambiguities. The method first filters each unambiguous frequency point out of the ambiguous Doppler channels of a few different phase centers using a spatial filter (a step that can be regarded as space-time processing), then combines all unambiguous frequency points into a whole unambiguous Doppler band, and finally performs image formation. Theoretical derivation, performance analysis, and simulation of this method are discussed in the paper.
This paper reviews recent and ongoing antenna technology and systems development in the Special Projects Office of the Defense Advanced Research Projects Agency (DARPA/SPO). These programs fall into two categories: development and application of antenna component technologies and development of transportable phased-array radar antennas. These development programs are presented in a chronological order.
This paper develops a method for forming a synthetic-aperture image of a flat surface seen through a homogeneous layer of a material that is dispersive, i.e., its wave speed varies with frequency.
We first outline a simplified scalar model for electromagnetic wave propagation in a dispersive medium; the resulting equation could also be used for acoustics. We show that the backscattered signal can be viewed as a Fourier integral operator applied to the ground reflectivity function. The reconstruction method, which is based on backprojection, can be used for arbitrary sensor paths and corrects for the radiated beam pattern, the source waveform, and geometrical spreading factors. The method correctly reconstructs the singularities (such as edges) that are visible from the sensor.
This paper introduces a new concept for air-to-ground noise radar based on adaptive filtering. A transmitting antenna illuminates a region of interest with a continuous noise waveform. The processor within the receiver treats the illuminated scene as a linear system with unknown coefficients that filters the transmitted signal. Given access to the transmitted waveform and the digitized backscattered signal, the receiver adaptively estimates the unknown filter coefficients, using the same processing architecture as a wireless channel equalizer, and continues to update their values as the transmitter and receiver traverse their flight paths. The
adapted filters correspond to range profiles of the illuminated scene which may be Doppler processed to yield synthetic aperture imagery.
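The receiver's estimation step can be sketched with a least-mean-squares (LMS) adaptive filter, the workhorse of wireless channel equalizers. This is a minimal illustration, not the paper's implementation: the scene is reduced to a hypothetical three-scatterer FIR "channel," and the step size and tap count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a few point scatterers at different ranges,
# represented as a sparse FIR "channel" the noise waveform passes through.
h_true = np.array([0.0, 0.8, 0.0, -0.5, 0.3])
n_taps = len(h_true)

x = rng.standard_normal(20000)           # transmitted noise waveform
d = np.convolve(x, h_true)[:len(x)]      # backscattered (received) signal

# LMS adaptive filter: estimate the channel taps from (x, d),
# the same architecture a wireless channel equalizer uses.
w = np.zeros(n_taps)
mu = 0.01                                # step size (assumed, not from the paper)
for n in range(n_taps, len(x)):
    u = x[n - n_taps + 1:n + 1][::-1]    # most recent samples, newest first
    e = d[n] - w @ u                     # prediction error
    w += mu * e * u                      # stochastic-gradient update

# w now approximates h_true: a range profile of the illuminated scene.
```

The converged tap vector `w` is what the paper calls a range profile; stacking such profiles over slow time would feed the Doppler processing that yields imagery.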
Spatial reconstruction of a rigid, moving target's scattering centers using one-dimensional, high range resolution (HRR) radar remains of high interest in synthetic aperture radar (SAR) processing of moving targets. Innovative range and Doppler equations for a rotating target with constant angular velocity were developed by Fazio, Hong, and Wood and presented at the April 2002 SPIE AeroSense Conference in Orlando, Florida. Further research has produced a method of reconstructing a three-dimensional scattering center model of a moving target with variable angular velocity. The reconstruction algorithm uses the relative ranges from a minimum of five observations of three scattering centers. In-plane rotational motion provides the information necessary for positioning the projection of the scattering centers onto the observation (reconstruction) plane, while out-of-rotational-plane target motion is necessary to locate the center above or below the reconstruction plane.
In Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach, the authors describe three fundamental angles of SAR imaging: grazing, tilt, and slope. This paper extends the concept of fundamental angles used in SAR ground-mapping and applies it to the problem of imaging a moving target. A method for projecting target scattering centers of a moving target into range-Doppler image coordinates will be shown. This ability should naturally result in increased performance by a model-based ATR system. Concepts are demonstrated using both a simulated array of targets and XPATCH scattering centers.
It is useful in applications of imaging moving vehicles to view the relative motion from a "target-centric" viewpoint. It was shown in a previous paper by the same authors that using "novel" definitions of the SAR fundamental angles and viewing the problem from a target-centric perspective allows one to determine how a moving target is projected into the range-Doppler plane. This paper develops a clear and concise mathematical "connection" from the source of the motion estimation error to the projection of the target scattering centers into the range-Doppler plane. It is already known that making accurate scatterer predictions is essential to the performance of a model-based ATR (MBATR) subsystem. Therefore, the ability to make better predictions of scatterer locations should naturally lead to improved MBATR performance against moving targets.
SAR interferometry is a technique used to reconstruct detailed terrain height maps. The technique requires two SAR images of the same patch of ground. In order for the interferometric process to succeed, the imaging collection geometry must be within tightly held constraints. The two images are registered and the phases are compared using a 2-dimensional averaging box of several pixels. This phase difference is then proportional to the terrain height at the location of the center of the averaging box.
This local averaging process is important because it reduces phase-noise and subsequently produces better height maps. The process, however, assumes that the phase difference is constant over the averaging box. In areas where steep slopes exist, this assumption is violated and the resulting phase difference measurements are in error, resulting in corrupted height maps.
This paper presents a technique that extends the model of the phase in the averaging box to allow a 2-dimensional linear phase slope to exist. The process estimates the constant phase (the phase that is a measure of the local terrain height) and the phase slope (which is a measure of the terrain slope) in an individual averaging box. Extending the model to include the linear phase slopes greatly improves the constant phase difference estimates, especially in areas of steep terrain. This results in much more accurate and reliable terrain products.
This paper demonstrates the viability of the technique on actual SAR data.
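The idea of jointly estimating a constant phase and a linear phase ramp inside an averaging box can be sketched on synthetic data. This is an illustrative stand-in for the paper's estimator: the box size, noise level, and the FFT-peak method for finding the ramp are all assumptions, and the ramp frequencies are kept on integer DFT bins for simplicity.

```python
import numpy as np

# Synthetic 16x16 averaging box: constant phase phi0 (terrain height term)
# plus a 2-D linear phase ramp (terrain slope term), with phase noise.
N = 16
phi0 = 0.7
kx, ky = 2, 3                              # fringe frequencies in cycles/box
y, x = np.mgrid[0:N, 0:N]
rng = np.random.default_rng(1)
noise = 0.1 * rng.standard_normal((N, N))
box = np.exp(1j * (phi0 + 2*np.pi*(kx*x + ky*y)/N + noise))

# Naive estimate: averaging the box assumes constant phase, so the ramp
# makes the samples partially cancel and the estimate is unreliable.
naive = np.angle(box.mean())

# Slope-aware estimate: the dominant 2-D FFT bin locates the linear
# phase slope; demodulating it and then averaging recovers phi0.
F = np.fft.fft2(box)
ky_hat, kx_hat = np.unravel_index(np.argmax(np.abs(F)), F.shape)
demod = box * np.exp(-1j * 2*np.pi*(kx_hat*x + ky_hat*y)/N)
phi0_hat = np.angle(demod.mean())
```

Removing the ramp before averaging is the essence of the paper's extension: the constant-phase estimate survives even where the terrain-slope fringes would otherwise wash it out.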
This paper compares three algorithms for potential use in a real-time, on-board implementation of spotlight-mode SAR image formation. These include: the polar formatting algorithm (PFA), the range migration algorithm (RMA), and the overlapped subapertures algorithm (OSA). We conclude that for any reasonable spotlight-mode imaging scenario, PFA is easily the algorithm of choice because its computational efficiency is significantly higher than that of either RMA or OSA. This comparison specifically includes cases in which wavefront curvature is sufficient to cause image defocus in conventional PFA, because a post-processing refocus step can be performed with PFA to yield excellent image quality for only a minimal increase in computation time. We demonstrate that real-time image formation for many imaging scenarios is achievable using PFA implemented on a single Pentium M processor. OSA is quite slow compared to PFA, especially for the case of moderate to high resolution (9 inches and better). RMA is not competitive with PFA for situations that do not require wavefront curvature correction.
For those cases in which PFA requires post-processing to correct for wavefront curvature, RMA comes closer in efficiency to PFA, but is still outperformed by the modified PFA.
We consider nonparametric complex spectral estimation using an adaptive filtering based approach where the finite impulse response (FIR) filter-bank is obtained via a rank-deficient robust Capon beamformer. We show that by allowing the sample covariance matrix to be rank-deficient, we can achieve much higher resolution than existing approaches, which is useful in many applications including radar target detection and feature extraction. Numerical examples are provided to demonstrate the performance of the new approach as compared to existing data-adaptive and data-independent FIR filtering based spectral estimation methods.
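A minimal Capon (MVDR) spectral estimator illustrates the filter-bank idea. This sketch substitutes simple diagonal loading for the paper's rank-deficient robust Capon formulation, so it shows the flavor rather than the proposed method; signal frequencies, snapshot length, and loading level are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two closely spaced complex sinusoids in noise (frequencies assumed).
N, f1, f2 = 128, 0.20, 0.23
n = np.arange(N)
sig = np.exp(2j*np.pi*f1*n) + 0.8*np.exp(2j*np.pi*f2*n)
sig += 0.05*(rng.standard_normal(N) + 1j*rng.standard_normal(N))

# Sample covariance from overlapping length-M snapshots; diagonal
# loading stands in for the robust/rank-deficient handling in the paper.
M = 64
snaps = np.stack([sig[i:i+M] for i in range(N - M + 1)], axis=1)
R = snaps @ snaps.conj().T / snaps.shape[1]
R_dl = R + 1e-3 * np.trace(R).real / M * np.eye(M)
Ri = np.linalg.inv(R_dl)

# Capon (MVDR) spectrum: P(f) = 1 / (a^H R^{-1} a) over a frequency grid.
freqs = np.linspace(0, 0.5, 1000)
A = np.exp(2j*np.pi*np.outer(np.arange(M), freqs))   # steering vectors
P = 1.0 / np.real(np.sum(A.conj() * (Ri @ A), axis=0))
```

The data-adaptive denominator suppresses everything except the steered frequency, which is why Capon-type filter banks resolve tones that a fixed (data-independent) FIR bank smears together.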
To many contemporary radar engineers the term "radar imaging" has come to be synonymous with "scattering center localization." Radar image-based target classification and identification, for example, is typically interpreted as a library look-up process in which the position and strength of target scattering centers is matched to a set of known template signatures. But the ability to accurately estimate scatterer position and strength is severely hampered by low image resolution and noise contamination. In addition, inverse synthetic aperture radar images often also require costly preprocessing steps (such as polar reformatting) to assure adequate accuracy. We describe a simple method, based on subspace fitting techniques, that can be applied to the position and strength estimation problem in this time-constrained and data-limited environment. The scheme is robust against noise corruption and allows for super-resolved estimates of all (or some) of the scatterers. Examples based on both real and synthetic data are presented.
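Subspace fitting for scatterer localization can be sketched in one dimension, where a stepped-frequency range profile is a sum of complex exponentials. The example below uses MUSIC as a representative subspace technique, not the authors' specific scheme; the scatterer positions, amplitudes, snapshot count, and noise level are all assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D range-profile model: point scatterers observed over K stepped
# frequencies give a sum of complex exponentials in noise.
K = 64
pos = np.array([0.22, 0.25, 0.60])        # normalized scatterer positions
amp = np.array([1.0, 0.7, 0.9])
k = np.arange(K)
snapshots = []
for _ in range(40):                        # snapshots with random scatterer phases
    phases = np.exp(2j*np.pi*rng.random(3))
    s = (amp*phases) @ np.exp(2j*np.pi*np.outer(pos, k))
    snapshots.append(s + 0.05*(rng.standard_normal(K) + 1j*rng.standard_normal(K)))
X = np.array(snapshots).T                  # K x 40 data matrix

# Subspace step: the 3 dominant eigenvectors of the covariance span the
# signal subspace; MUSIC scans steering vectors against the noise subspace.
R = X @ X.conj().T / X.shape[1]
w, V = np.linalg.eigh(R)                   # ascending eigenvalues
En = V[:, :-3]                             # noise subspace
grid = np.linspace(0, 1, 2000, endpoint=False)
A = np.exp(2j*np.pi*np.outer(k, grid))
music = 1.0 / np.sum(np.abs(En.conj().T @ A)**2, axis=0)
```

Peaks of the pseudo-spectrum mark scatterer positions, and the peak heights are not tied to the Fourier resolution of the aperture, which is the property the abstract exploits for super-resolved estimates.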
This paper presents a numerical Bayesian approach to the autofocus and super-resolution of targets in radar imagery. An ill-posed inverse problem is studied in which the known linear imaging operator is subject to an unknown degree of distortion (defocusing). The goal is simultaneously to reconstruct a high-resolution representation of a target based on noisy lower resolution image measurements and to estimate the degree of defocus. We present a Markov chain Monte Carlo algorithm for parameter estimation, illustrate the approach on an explanatory example and compare our technique with a maximum likelihood approach. Given a model for the sensor measurement process, this technique may be applied to any type of radar image such as those produced by a synthetic aperture radar (SAR), inverse SAR (ISAR) or a real beam imaging radar. The proposed approach fits into a larger set of procedures aiming to exploit targeting information from different radar sensors.
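The Markov chain Monte Carlo idea can be sketched on a toy version of the problem: a known point target blurred by a kernel whose width plays the role of the unknown defocus. This is only a schematic of random-walk Metropolis-Hastings under assumed values (Gaussian blur model, flat prior, noise level, step size); the paper's measurement model is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: a point target blurred by a Gaussian kernel whose
# width sigma stands in for the unknown degree of defocus.
def blur(sigma, t=np.arange(-10, 11)):
    kern = np.exp(-t**2 / (2*sigma**2))
    return kern / kern.sum()

sigma_true, noise_std = 1.5, 0.02
y = blur(sigma_true) + noise_std*rng.standard_normal(21)

def log_post(sigma):                       # Gaussian likelihood, flat prior
    if sigma <= 0.1 or sigma > 5.0:
        return -np.inf
    r = y - blur(sigma)
    return -0.5*np.sum(r**2)/noise_std**2

# Random-walk Metropolis-Hastings over the defocus parameter.
sigma, samples = 0.8, []
lp = log_post(sigma)
for _ in range(4000):
    prop = sigma + 0.1*rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        sigma, lp = prop, lp_prop
    samples.append(sigma)

sigma_hat = np.mean(samples[1000:])        # posterior mean after burn-in
```

In the paper's setting the chain would also carry the high-resolution target representation, with the same accept/reject mechanics applied jointly to image and defocus parameters.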
Automated aids for SAR image interpretation are required to address both the 'data deluge' problem in surveillance applications and the 'cockpit load' problem in targeting applications. With resolutions becoming comparable to the radar wavelength, the interactions of scattering events on hard targets result in a signal which is very difficult to interpret, especially on the basis of the signal amplitude alone. For this reason, there has been a significant move towards complex image analysis, spearheaded by Rihaczek [Rihaczek and Hershkowitz, Radar Resolution and Complex-Image Analysis, Artech House, 1996], who has presented strong evidence to suggest that use of the image phase is the key to understanding the complicated scatterer interactions and is hence the key to target recognition. He has introduced a 'two-scatterer algorithm' which attempts to unravel the complex signal to determine the presence of two closely spaced scatterers. He also hypothesises that the approach may be extended to multiple scatterers. In this paper, an alternative to the 'two-scatterer algorithm' of Rihaczek has been introduced which is based on a statistical model and a fitting procedure. This has allowed a theoretical analysis of the errors in estimation of scatterer parameters to be performed, thus leading to an alternative definition of resolution which is applicable to complex image analysis. The new definition overcomes the inability of the traditional definition of resolution to describe the varying degree to which equidistant in-phase and out-of-phase scatterers can be distinguished. The resolution limits appear naturally from this error analysis as the result of the errors increasing rapidly as the two scatterers approach each other. A generalisation to multiple scatterers arises naturally from the formulation which offers the prospect of significantly extending the applicability of complex image analysis to the understanding of scattering from hard targets.
This paper extends simulation and target detection results from an investigation entitled "Self-Training Algorithms for Ultra-wideband SAR Target Detection" that was conducted last year and presented at the 2003 SPIE Aerosense Conference on "Algorithms for Synthetic Aperture Radar Imagery." Under this approach, simulated SAR impulse clutter data was generated by modulating a tophat model for the SAR video phase history with K-distributed data models. Targets were synthesized and "instanced" within the SAR image via the application of a dihedral model to represent broadside targets. For this paper, these models are extended and generalized by developing a set of models that approximate major scattering mechanisms due to terrain relief and approximate major scattering mechanisms due to scattering from off-angle targets. Off-angle targets are difficult to detect at typical ultra-wideband radar frequencies and are denoted as "diffuse scatterers." Potential approaches for detecting synthetic off-angle targets that demonstrate this type of "diffuse scattering" are developed and described in the algorithms and results section of the paper. A preliminary set of analysis outputs are presented with synthetic data from the resulting simulation testbed.
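The K-distributed clutter modulation mentioned above is conventionally built as a compound process: a gamma-distributed texture (mean one) multiplying complex Gaussian speckle. The sketch below shows that standard construction with an assumed shape parameter; it is not the paper's specific tophat/video-phase-history pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

# Compound (product) model for K-distributed clutter: gamma texture with
# unit mean modulating unit-power complex Gaussian speckle. The shape
# parameter nu is assumed; small nu gives spikier clutter.
nu, n = 1.5, 200000
texture = rng.gamma(shape=nu, scale=1.0/nu, size=n)            # E[texture] = 1
speckle = (rng.standard_normal(n) + 1j*rng.standard_normal(n)) / np.sqrt(2)
clutter = np.sqrt(texture) * speckle                           # K-distributed amplitude

intensity = np.abs(clutter)**2
# Compound-model moments: E[I] = 1 and E[I^2]/E[I]^2 = 2(1 + 1/nu),
# which is how simulated clutter spikiness is usually sanity-checked.
```

Multiplying a simulated phase-history envelope by such draws is one standard way to impose K-distributed statistics on otherwise Gaussian SAR clutter.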
We consider imaging strategies for synthetic aperture radar data
collections that span a wide angular aperture. Most traditional
radar imaging techniques are predicated on the assumption of
isotropic point scattering mechanisms, which does not hold for
wide apertures. We investigate point scattering center images for
narrowband, wide angle data, and consider the effect of limited
persistence on the resulting images. We investigate imaging
strategies that apply to wide angle apertures. We show that
coherent processing of the entire wide angle aperture may not be
the best image formation strategy for objects of practical
interest. Finally, we present initial results on resolution
enhancement techniques for wide angle apertures.
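Why coherent processing of the full aperture can fail for limited-persistence scatterers is easy to see in a toy slow-time example. The sketch below is illustrative only: the persistence span, subaperture size, and noise level are assumed, and the "images" are reduced to scalar averages.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy anisotropic scatterer: unit response over only a quarter of the
# slow-time (angle) samples, with noise everywhere.
n_pulses, span = 360, 90
resp = np.zeros(n_pulses, dtype=complex)
resp[:span] = 1.0
resp += 0.3*(rng.standard_normal(n_pulses) + 1j*rng.standard_normal(n_pulses))

# Full-aperture coherent average: the scatterer's energy is diluted by
# all the pulses over which it does not persist.
coherent = np.abs(resp.mean())

# Subaperture strategy: coherent average within each 45-pulse
# subaperture, then a noncoherent maximum across subapertures.
subs = resp.reshape(8, 45).mean(axis=1)
noncoherent = np.abs(subs).max()
```

The subaperture result retains nearly the full scatterer amplitude, while the full-aperture coherent average reports only the persistence fraction, which is the effect driving the imaging strategies discussed above.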
We present the application of a recently-developed region-enhanced
synthetic aperture radar (SAR) image reconstruction technique to the
problem of passive radar imaging. One goal in passive radar imaging is
to form images of aircraft using signals transmitted by commercial
radio and television stations, which then get reflected from the
objects of interest. This involves reconstructing an image from sparse
samples of its Fourier transform. Due to the sparse nature of the
aperture, a conventional image formation approach based on direct
Fourier transformation results in quite dramatic artifacts in the
image, as compared to the case of active SAR imaging. The
region-enhanced image formation method we consider appears to
significantly reduce such artifacts, and preserve the features of the
imaged object. Furthermore, this approach exhibits robustness to
measurement noise. We demonstrate our results using data based on
electromagnetic simulations.
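Reconstruction from sparse Fourier samples can be sketched in one dimension. The example below uses iterative soft thresholding (a sparsity-enhancing method) as a generic stand-in for the region-enhanced technique; the problem sizes, sampling pattern, and regularization weight are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse reflectivity observed through a random subset of its DFT,
# mimicking a sparse-aperture collection.
N, M = 64, 32
x_true = np.zeros(N)
x_true[[5, 20, 41]] = [1.0, -0.8, 0.6]

U = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary DFT matrix
idx = rng.choice(N, M, replace=False)
A = U[idx]                                   # sparse Fourier sampling operator
y = A @ x_true

# Conventional formation: adjoint (zero-filled inverse DFT), which
# produces strong artifacts from the missing samples.
x_adj = np.real(A.conj().T @ y)

# Feature-preserving formation: iterative soft thresholding (ISTA) for
# the l1-regularized inverse problem; step size 1 is valid since ||A|| <= 1.
lam, x = 0.02, np.zeros(N)
for _ in range(300):
    g = x + np.real(A.conj().T @ (y - A @ x))    # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
```

The regularized solution suppresses the aperture-induced artifacts that dominate the adjoint image, paralleling the artifact reduction the abstract reports for region-enhanced formation.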
A technique to form super-resolved 3D Synthetic Aperture Radar (SAR) images from a limited number of elevation passes is presented in this paper. This technique models the environment as containing a finite number of isotropically radiating, frequency independent point scatterers in Additive White Gaussian Noise (AWGN), and applies a hybrid super-resolution method that yields the Maximum Likelihood (ML) estimates of scatterer strengths and resolves their locations in the data deficient dimension well beyond the Fourier resolution limit.
Visual-D is a 2004 DARPA/IXO seedling effort that would develop a capability for reliable, high-confidence ID from standoff ranges. The ability to form optical-quality SAR images (exploiting full polarization, wide angle, etc.) would be key evidence that such a capability is achievable. The seedling team produced a public-release data set and associated challenge problems to support community research in this area. This paper describes the full data set and three associated challenge problems that are defined over interesting subsets of the full data set.
This paper discusses development of physics-based models for bistatic scattering. We generalize parametric equations for monostatic scattering mechanisms in a plane to achieve analogous bistatic approximations. Combination of these mechanisms, as separable azimuth and elevation components, allows 3-D modelling of six scattering primitives: sphere, tophat, trihedral, dihedral, cylinder, and flat plate. The responses of these scattering center models are shown to compare favorably with results obtained from validated high-frequency simulations.
Two bistatic SAR image quality metrics are postulated. The combination of the two metrics provides a quick and computationally non-intensive means of predicting bistatic SAR image quality as a function of collection geometry. The metrics are based on local orthogonality criteria. By noting the fact that all SAR imaging techniques essentially map the downrange and Doppler frequency onto the image plane, it is observed that the downrange and crossrange direction vectors can be obtained via the gradients of the isorange and isoDoppler contours respectively. Using the criterion that maximum image information content is obtained if the downrange and crossrange directions are orthogonal, a metric is postulated which relates the isorange and isoDoppler contour gradients in such a way as to be maximum under such orthogonal conditions. A secondary measure of merit is also postulated, related to isoDoppler contour density and crossrange resolution for a given CPI. A good qualitative correlation between the image metrics and reconstructed images of a representative collection of point scatterers was shown to exist.
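The orthogonality idea can be sketched numerically: take ground-plane gradients of the bistatic range sum and of the Doppler (radial-velocity) field, and score the geometry by the sine of the angle between them. This is an illustrative reading of the metric, not the paper's exact formulation, and the platform positions and velocities below are assumed.

```python
import numpy as np

# Sine of the angle between the ground-plane gradients of the bistatic
# range sum and the Doppler field (1 = orthogonal downrange/crossrange).
def unit(v):
    return v / np.linalg.norm(v)

def metric(tx, rx, vt, vr, p=np.zeros(3), h=1e-3):
    def rng_sum(q):                       # isorange contours are its level sets
        return np.linalg.norm(q - tx) + np.linalg.norm(q - rx)
    def doppler(q):                       # isoDoppler contours are its level sets
        return vt @ unit(q - tx) + vr @ unit(q - rx)
    g = np.zeros((2, 2))
    for f_i, f in enumerate((rng_sum, doppler)):
        for d in range(2):                # central differences, ground plane only
            e = np.zeros(3); e[d] = h
            g[f_i, d] = (f(p + e) - f(p - e)) / (2*h)
    c = abs(unit(g[0]) @ unit(g[1]))      # |cos| between the two gradients
    return np.sqrt(1 - c**2)

tx = rx = np.array([0.0, 10000.0, 5000.0])    # monostatic, broadside
vt = vr = np.array([150.0, 0.0, 0.0])
m_broadside = metric(tx, rx, vt, vr)

tx2 = np.array([8000.0, 6000.0, 5000.0])      # displaced transmitter (bistatic)
m_squint = metric(tx2, rx, vt, vr)
```

A broadside monostatic geometry scores near 1 (orthogonal range and Doppler gradients), while displacing the transmitter degrades the score, matching the qualitative behavior the metric is meant to predict.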
The demand for high-resolution ISAR data on tactical targets at all radar bands has been growing steadily. Here we describe a new 350GHz compact range currently being constructed to acquire fully polarimetric X-band data using 1/35th scale models. ERADS currently operates compact ranges from X to W-band using 1/16th scale models. The addition of this new compact range using 1/35th scale models will permit the measurement of larger targets and the measurement of multiple targets arranged in a scene. It will also allow us to take advantage of the large number of commercially available models at 1/35th scale. The 350GHz transceiver uses two high-stability optically pumped far-infrared lasers, microwave/laser 350GHz mixer side-band generation for frequency sweep, and a pair of waveguide mounted diode receivers for coherent integration. The 35GHz bandwidth at a center frequency of 350GHz will allow the X-band transceiver system to collect data with up to 6-inch down range resolution, with a round trip half power beam diameter corresponding to 60 feet. Tactical targets may be measured in free space or on various ground planes, which simulate different types of terrain. Compact range measurements of simple calibration objects have been performed and compared to theoretical results using computer code predictions. A correlation study of X-band data using field measurements, 1/35th scale models and 1/16th scale models is planned upon completion of compact range construction. Available results of the diagnostic tests and the correlation study will be presented.
This paper presents a novel approach to multi-sensor statistical modeling of bi-directional texture functions (BTF). Our proposed BTF modeling approach is based on (1) conducting an analytical study that relates a sensor resolution to the size and shape of elements forming material surface, (2) developing a robotic system for laboratory BTF data acquisition, (3) researching an application of the Johnson family of statistical probability distribution functions (PDF) to BTF modeling, (4) selecting optimal feature space for statistical BTF modeling, (5) building a database of parameters for the Johnson family of PDFs that after interpolations forms a high-dimensional statistical BTF model and (6) researching several statistical quality metrics that can be used for verification and validation of the obtained BTF models. The motivation for developing the proposed statistical BTF modeling approach comes from the facts that (a) analytical models have to incorporate randomness of outdoor scene clutter surfaces and (b) models have to be computationally feasible with respect to the complexity of modeled interactions between light and materials. The major advantages of our approach over other approaches are (a) the low computational requirements on BTF modeling (BTF model storage, fast BTF model-based generation), (b) flexibility of the Johnson family of PDFs to cover a wide range of PDF shapes and (c) applicability of the BTF model to a wide range of spectral sensors, e.g., color, multi-spectral or hyperspectral cameras. The prime applications for the proposed BTF model are multi-sensor automatic target recognition (ATR), and scene understanding and simulation.
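Step (3), fitting a Johnson-family PDF to a texture feature, can be sketched with SciPy's Johnson SU distribution. This is a toy stand-in for the paper's BTF database workflow: the samples are drawn from a known Johnson SU law with assumed parameters and then recovered by maximum likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Illustrative stand-in for one texture-feature histogram: samples from
# a known Johnson SU distribution (shape parameters a, b are assumed).
a, b = 1.2, 1.8
data = stats.johnsonsu.rvs(a, b, loc=0.5, scale=2.0, size=4000, random_state=rng)

# Fit the four Johnson SU parameters (a, b, loc, scale) by maximum
# likelihood; these are the values a BTF parameter database would store.
params = stats.johnsonsu.fit(data)

# Goodness of fit via the Kolmogorov-Smirnov statistic, one of the
# statistical quality metrics a verification step might use.
ks = stats.kstest(data, 'johnsonsu', args=params).statistic
```

The appeal of the Johnson family noted in the abstract is visible here: four parameters cover a wide range of skewed and heavy-tailed shapes, so one compact parameter record per (view, illumination, feature) cell can stand in for a full histogram.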
This paper presents an algorithm for the automatic georegistration of electro-optical (EO) and synthetic aperture radar (SAR) imagery intelligence (IMINT). The algorithm uses a scene reference model in a global coordinate frame to register the incoming IMINT, or mission image. Auxiliary data from the mission image and this model predict a synthetic reference image of a scene at the same collection geometry as the mission image. This synthetic image provides a traceback structure relating the synthetic reference image to the scene model. A correlation matching technique is used to register the mission image to the synthetic reference image. Once the matching has been completed, mission image pixels can be transformed into the corresponding synthetic reference image. Using the traceback structure associated with the synthetic reference image, these pixels can then be transformed into the scene model space. Since the scene model space exists in a global coordinate frame, the mission image has been georegistered. This algorithm is called Prediction-Based Registration (PBR).
There are a number of advantages to the PBR approach. First, the transformation from image space to scene model space is computed as a 3D to 2D transformation. This avoids solving the ill-posed problem of directly transforming a 2D image into 3D space. The generation of a synthetic reference simplifies the image matching process by creating the synthetic reference at the same geometry as the mission image. Further, dissimilar sensor phenomenologies are accounted for by using the appropriate sensor model. This allows sensor platform and image formation errors to be accounted for in their own domain when multiple sensors are being registered.
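The correlation-matching step can be sketched as FFT-based circular cross-correlation between the mission image and the synthetic reference (a minimal sketch with synthetic data; the images and offset are hypothetical, not PBR's actual sensor models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic reference image, and a mission image that is
# the same scene shifted by an unknown offset plus sensor noise.
ref = rng.random((64, 64))
mission = np.roll(ref, (5, 3), axis=(0, 1))
mission = mission + 0.05 * rng.standard_normal(mission.shape)

# Correlation matching: the peak of the cross-correlation surface is
# the registration offset, which then maps mission pixels into the
# synthetic reference image (and, via traceback, into model space).
spec = np.fft.fft2(mission) * np.conj(np.fft.fft2(ref))
corr = np.fft.ifft2(spec).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print((int(dy), int(dx)))  # recovers the (5, 3) shift
```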
VHF-band SAR used in conjunction with change detection techniques has shown promising results for wide-area surveillance of ground targets. At VHF-band frequencies, both targets in the open and targets concealed by foliage may be detected, with high probability and a low false-alarm rate. VHF-band SAR is able to detect hidden targets because both foliage attenuation and clutter backscatter are small. The clutter is further suppressed through the use of change detection, significantly reducing the false-alarm rate. Change detection techniques are well suited to VHF-band SAR since temporal decorrelation is small at these long wavelengths.
The CARABAS-II system performed a data collection during the summer of 2002. The primary goal of this collection was to gather data to evaluate VHF-band SAR change detection performance under various operating conditions. This paper reports the results obtained. In general, the results show that a VHF-band SAR system employing change detection can reliably and robustly detect truck-sized targets hidden in foliage. The detection performance does deteriorate under certain conditions: a significant reduction is found for near-grazing angles, and a significant performance loss is found for smaller targets when the radar bandwidth is reduced.
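The principle of two-pass incoherent change detection can be sketched as follows (a toy simulation with a simple global threshold as a stand-in for a CFAR detector; all scene parameters are illustrative, not CARABAS-II values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical co-registered SAR amplitude images of the same
# scene: temporal decorrelation is low at VHF, so the forest clutter
# is nearly identical between passes, while a deployed target adds a
# bright return in the second pass only.
clutter = rng.rayleigh(scale=1.0, size=(100, 100))
pass1 = clutter + 0.1 * rng.standard_normal((100, 100))
pass2 = clutter + 0.1 * rng.standard_normal((100, 100))
pass2[40:43, 60:63] += 8.0  # target present only in pass 2

# Incoherent change detection: difference image plus a threshold set
# from the difference statistics.
diff = pass2 - pass1
thresh = diff.mean() + 5 * diff.std()
detections = np.argwhere(diff > thresh)
print(len(detections))  # the 9 target pixels, no false alarms
```

Because the common clutter cancels in the difference image, the threshold can be set far below the single-image clutter level, which is why change detection suppresses the false-alarm rate so effectively.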
We present a physically-based clutter model for low-frequency synthetic aperture radar that includes both distributed scatterers and large-amplitude discrete clutter. The model is used to generate a synthetic forest clutter scene comprising two components, a background component and a heavy-tailed discrete component. Model parameters are based on characteristics of the scene, such as the radar cross-section of trees, forest thickness, and background radar cross-section. A synthetic SAR image of the scene is generated by modelling the radar imaging process as a lowpass filter and convolving the scene with the impulse response of the radar. We compare the synthetic, single-pass clutter image to measured data and present a metric for evaluating model fit. We also extend the model to describe correlated, multi-pass images for change detection applications.
To recognize an object in an image, an algorithm must identify not only the object pixels but also non-object clutter pixels. Non-object pixels can be assessed with a priori clutter models that account for varying terrain and cultural objects. Radar clutter models are well developed; however, these models typically incorporate a single distribution to capture background effects. In this paper, we propose to use a fusion of distributions through mixture modeling to characterize various background clutter information, so as to more accurately develop a clutter model useful for object recognition. In a radar example, we show a fused distribution using Rayleigh and Pareto models to describe the average and heavy-tailed clutter characteristics, respectively.
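A two-component Rayleigh-Pareto mixture of this kind can be sketched as follows (parameter values are illustrative, not fitted to any measured clutter):

```python
import math

# Fused clutter amplitude density: a Rayleigh body for average
# clutter and a Pareto tail for heavy-tailed discretes, combined
# with mixing weight w.
def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2))

def pareto_pdf(x, alpha, xm):
    return alpha * xm**alpha / x**(alpha + 1) if x >= xm else 0.0

def mixture_pdf(x, w=0.95, sigma=1.0, alpha=2.5, xm=1.0):
    """w * Rayleigh body + (1 - w) * Pareto tail."""
    return w * rayleigh_pdf(x, sigma) + (1 - w) * pareto_pdf(x, alpha, xm)

# Far into the tail the Pareto term dominates, where the Rayleigh
# density has already decayed to essentially zero.
print(mixture_pdf(5.0) > rayleigh_pdf(5.0, 1.0))
```

A single-distribution model would set the detection threshold from the Rayleigh body alone and badly underestimate the tail; the mixture keeps the false-alarm prediction honest in both regimes.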
High resolution Synthetic Aperture Radar (SAR) imagery (e.g., four inch or better resolution) contains features not seen in one foot or lower resolution imagery, due to the isolation of the scatterers into separate resolution cells. These features provide the potential for additional discrimination power for Automatic Target Recognition (ATR) systems. In this paper, we analyze the performance of the Real-Time MSTAR (RT-MSTAR) system as a function of image resolution. Performance is measured both in terms of the probability of correct identification on military targets, and also in terms of confuser rejection. The analysis demonstrates two factors that significantly enhance performance. First, use of the high resolution imagery results in much higher probability of correct identification, as demonstrated using Lynx SAR imagery at 4" and 12". Second, incorporating models of the confusers, when available, greatly reduces false alarms, even at higher resolutions. Several new areas of work emerge, including making use of higher-level feature information available in the imagery, and rapid creation of models for vehicles that pose particular confuser rejection challenges.
The classification of three types of ground vehicle targets from the MSTAR (Moving and Stationary Target Acquisition and Recognition) database is investigated using hidden Markov models (HMMs) and synthetic aperture radar images. The HMMs employ training sets of six power spectrum features extracted from High Range Resolution (HRR) radar signal magnitude versus range profiles of the targets for uniform sequences of aspect angles (7 degree separation). Classification accuracy versus numbers of hidden states (from 3 to 30), sequence length (3, 10, 15, and 30), and discretization level of the features (10 and 30 levels) is explored using test and validation data. Best classification (94% correct) is achieved for 3 hidden states, a sequence length of 30, and 10 feature levels.
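The maximum-likelihood HMM classification step can be sketched with the scaled forward algorithm (a minimal sketch using the abstract's best configuration of 3 hidden states, 10 feature levels, and sequence length 30; the hand-built per-class emission matrices are illustrative, not trained HRR models):

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_levels, seq_len = 3, 10, 30

def make_hmm(c):
    """Illustrative class-c HMM: state s prefers feature level 3c+s."""
    pi = np.full(n_states, 1 / n_states)
    A = np.full((n_states, n_states), 1 / n_states)
    B = np.full((n_states, n_levels), 0.1 / (n_levels - 1))
    for s in range(n_states):
        B[s, 3 * c + s] = 0.9
    return pi, A, B

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def sample_obs(pi, A, B, length):
    s = rng.choice(n_states, p=pi)
    obs = []
    for _ in range(length):
        obs.append(rng.choice(n_levels, p=B[s]))
        s = rng.choice(n_states, p=A[s])
    return obs

models = [make_hmm(c) for c in range(3)]       # three target classes
obs = sample_obs(*models[1], seq_len)          # sequence from class 1
pred = max(range(3), key=lambda c: forward_loglik(obs, *models[c]))
print(pred)
```

Each class's HMM scores the same discretized feature sequence; the classifier reports the class whose model assigns the highest likelihood.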
Many years of tracking research have shown that the greatest obstacle to effective track estimation is accurately associating sensor kinematic reports to known tracks, new tracks, or clutter. Errors in report association occur more frequently under increasingly stressful conditions, like closely-spaced targets and low measurement rates, which can lead to unstable and even divergent tracking performance. It is widely expected that adding target features will aid report association and result in enhanced track accuracy and lengthened track life. Although sensors can provide features to enhance association, progress in implementing feature aiding has been slowed by the lack of data and tools that could assist exploration and algorithm development. To encourage research in this important discipline, the Sensors Directorate of the Air Force Research Laboratory (AFRL/SN) is sponsoring a challenge problem called Feature-Aided Tracking of Stop-move Objects (FATSO). FATSO's long-range goal is to provide a full suite of public data and software to promote explorations into viable methods of feature aiding. This paper introduces the FATSO project, focusing on an upcoming release that will contain data from a diverse target set and predictor software for generating radar signatures.
Classification of 3D objects is becoming an increasingly important research area due to cheap and innovative sensor technology. Shadows, noise, viewing direction, and distance from the sensor all directly affect the quality and amount of surface information provided by the sensor. The recognition approach described in this paper converts surface information, a set of (x,y,z) points, into a discrete 3D binary image. This conversion step processes the surface points using a fuzzy technique to mitigate the effects of noise and minor distortions. These images are then processed by sequences of one or two randomly selected morphological operators. The output of each sequence is then fed into a simple transducer to obtain a set of scalar feature values. The feature values are classified using a K-nearest-neighbor (KNN) classifier that is trained using a small number of training samples. Experiments were conducted using the Air Force Research Laboratory's E3D data and experimental protocol. Experimental results for the tank classification problem using 10 tanks and 26 confusers are presented. The results show that the combination of morphological processing and the KNN classifier produced consistently good performance under variations in noise, viewing angle, and distance.
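The morphological-feature pipeline can be sketched as follows, assuming SciPy's ndimage morphology (the voxel shapes, operator sequences, and voxel-count transducer are illustrative stand-ins for the E3D data and the paper's transducers):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)

# Hypothetical voxelized stand-ins for surface data: a solid block
# plays the "tank", a sparse random blob plays a "confuser".
def make_block():
    img = np.zeros((16, 16, 16), dtype=bool)
    img[4:12, 4:12, 4:12] = True
    return img ^ (rng.random(img.shape) < 0.02)   # noise / distortion

def make_blob():
    return rng.random((16, 16, 16)) < 0.12

# Fixed random sequences of one or two morphological operators; the
# transducer reducing each output to a scalar is a voxel count.
ops = [ndimage.binary_erosion, ndimage.binary_dilation]
seqs = [[ops[i] for i in rng.integers(0, 2, size=rng.integers(1, 3))]
        for _ in range(4)]

def features(img):
    out = []
    for seq in seqs:
        x = img
        for op in seq:
            x = op(x)
        out.append(x.sum())                       # scalar transducer
    return np.array(out, dtype=float)

# 1-nearest-neighbour classification from a sparse training set.
train = [(features(make_block()), "tank") for _ in range(3)] + \
        [(features(make_blob()), "confuser") for _ in range(3)]
query = features(make_block())
label = min(train, key=lambda t: np.linalg.norm(t[0] - query))[1]
print(label)
```

Erosion nearly annihilates the sparse blob while only trimming the solid block, so even these crude voxel-count features separate the two shapes.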
Targets may be more likely than non-targets to occur in groups. "Group detection" algorithms exploit this property of target behavior to improve the performance of a detection system. This paper develops some of the issues to be addressed when assessing the performance of a group detection algorithm. Two basic cases are considered: one where object detection is the goal (with group detection as an intermediate tool) and one where group detection is directly the goal. To understand the benefits of group detection algorithms in object detection, we propose comparing the pre-group and post-group object-level false alarm rates at a fixed detection probability. To understand the relative ease of group detection as an end in itself versus object detection, object-level Receiver Operating Characteristic (ROC) curves may be compared to group-level ROCs. The significance of the assessment approach is demonstrated by showing that different assessment approaches can produce apparent benefits that differ by several orders of magnitude. In addition to the methodology dependence, performance has the usual dependence on operating conditions (OCs), including the target grouping behavior (frequency of group sizes, spatial separation, and mismatch between model and reality), spatial dependence in clutter objects, and the pre-group object-level ROC (which in turn depends on classical OCs).
A decision system may compute a score that reflects its confidence in one of its decisions. This paper considers methods for evaluating such scores. There is a class of measures-of-performance (MOPs) for each of the score's two roles: discrimination (how well it separates targets and clutter) and confidence (how well it predicts its own accuracy). Area-under-the-ROC and probability of error are considered as discrimination MOPs. Error in the posterior (EP) and normalized cross entropy (NCE) are considered as confidence MOPs. MOPs for the scores are assessed using Monte Carlo simulations where known score distributions are sampled, allowing comparison of true and estimated MOPs. Classical data-direct ROC estimates are found to be equivalent to those based on explicit distribution estimation using probability mass functions (pmfs). An alternative distribution estimation based on histograms is recommended for empirical ROCs, being accurate and avoiding the unnatural stair-step character of data-direct ROCs. Confidence MOPs are more difficult to estimate than discrimination MOPs and NCE estimates are especially poor. EP may be meaningfully estimated through histogram density estimates and it is recommended as a replacement for the AdaptSAPS binned EP confidence MOP.
The ability of certain performance metrics to quantify how well target recognition systems under test (SUT) can correctly identify targets and non-targets is investigated. The SUT assigns a score between zero and one which indicates the predicted probability of a target. Sampled target and non-target SUT score outputs are generated using representative sets of Beta probability densities. Two performance metrics, Area under the Receiver Operating Characteristic (AURC) and Confidence Error (CE) are analyzed. AURC quantifies how well the target and non-target distributions are separated, and CE quantifies the statistical accuracy of each assigned score. CE and AURC are generated for many representative sets of beta-distributed scores, and the metrics are calculated and compared using continuous methods as well as discrete (sampling) methods. Close agreement in results with these methods for AURC is shown. Also shown are differences between calculating CE using sampled data and calculating CE using continuous distributions. These differences are due to the collection of similar sampled scores in bins, which results in CE weighting proportional to the sum of target and non-target scores in each bin. A method for an alternative weighted CE calculation using maximum likelihood estimation of density parameters is identified. This method enables sampled data to be processed using continuous methods.
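The two metrics can be sketched for sampled beta-distributed scores as follows (a minimal sketch: the rank-statistic AURC is standard, while the binned confidence error shown is one plausible reading of CE, with the bin weighting proportional to bin occupancy as the abstract describes; shape parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Beta-distributed SUT scores: targets skew high, non-targets low.
tgt = rng.beta(5, 2, size=4000)
non = rng.beta(2, 5, size=4000)

# AURC by the Mann-Whitney rank statistic: the probability that a
# random target score exceeds a random non-target score.
aurc = (tgt[:, None] > non[None, :]).mean()

# Binned confidence error: in each score bin, compare the mean score
# (the predicted P(target)) with the empirical fraction of targets,
# weighted by the fraction of scores falling in the bin.
scores = np.concatenate([tgt, non])
labels = np.concatenate([np.ones_like(tgt), np.zeros_like(non)])
bins = np.digitize(scores, np.linspace(0, 1, 11)) - 1
ce = 0.0
for b in range(10):
    m = bins == b
    if m.any():
        ce += m.mean() * abs(scores[m].mean() - labels[m].mean())
print(round(aurc, 2))
```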
A strong and growing interest in systems that adapt to changing circumstances was evident in panel discussions at the "Algorithms for SAR Imagery" Conference of the AeroSense Symposium in April 2003, with participation from DARPA, the Air Force, industry, and academia. As a result, Conference Co-Chair Mr. Ed Zelnio suggested producing a dynamic model to create problem sets suitable for adaptive system research and development. Such a problem set provides a framework for the overall problem, including organization of operating conditions, performance measures, and specific test cases. It is hoped that this AdaptSAPS framework will help provide the community with a more concrete base for discussing adaptation in SAR imagery exploitation. AdaptSAPS Version 1.0 was produced by the AFRL COMPASE and SDMS organizations and posted on 5 August 2003. AdaptSAPS consists of over a dozen MATLAB programs that allow the user to create "missions" with SAR data of varying complexities and then present that test data one image at a time, first as unexploited imagery and then with the exploitation results that an ATR could use for adaptation in an operational environment. AdaptSAPS keeps track of performance results and reports performance measures. This paper describes AdaptSAPS, its application process, and possible improvements as a problem set.
The ATR community has a strong and growing interest in ATR systems that adapt to changing circumstances and is developing means to solve these dynamic and difficult ATR problems. To facilitate this research, the AFRL COMPASE and SDMS organizations have developed an AdaptSAPS framework for developing and assessing such adaptive ATR systems. This framework, in the form of AdaptSAPS Version 1.0, provides MATLAB code, organized procedures, and an organized database for adaptive ATR systems.
SAIC is applying its Ellipse Detector (ED) to this framework to validate the AdaptSAPS procedures and to test the AdaptSAPS database. The ED has previously shown utility on a variety of sensors and ATR problems. Although computationally efficient, the ED is more complex and much more powerful than simpler detectors such as a two-parameter CFAR. However, the ED is not currently implemented as an adaptive ATR.
In this paper, we show the utility of the AdaptSAPS framework for developing and assessing a non-trivial adaptive ATR by embedding the SAIC ED in the AdaptSAPS framework. We point out the strong and weak points of AdaptSAPS Version 1.0 and recommend enhancements for future versions. In particular, we comment on AdaptSAPS as delivered, the current missions and databases in AdaptSAPS, and the current performance measures in AdaptSAPS.
We develop a radar-based automatic target recognition approach for partially occluded objects. The approach may be variously posed as an optimization problem in the phase history, scene reflectivity and feature domains. The latter consists of point scattering features estimated from the phase histories or corresponding images. We adopt simple occlusion models in which the physical scattering responses (isotropic scattering centers, attributed scatterers, etc.) can be occluded in any combination. The formulation supports the use of prior occlusion models (e.g., that occlusion is spatially correlated rather than randomly distributed). We introduce a physics-based noise covariance model for use in cost or objective functions. Occlusion model estimation is a combinatorial problem since the optimal subset of scatterers must be discovered from a potentially much larger set. Further, the number of occluded scatterers must be estimated as a part of the solution. We apply a genetic algorithm to solve the combinatorial problem, and we provide a simple demonstration example using synthetic data.
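The combinatorial search can be sketched with a minimal genetic algorithm over binary occlusion masks (illustrative population size, mutation rate, and synthetic scatterer signatures, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical scattering-center model: n point scatterers, each with
# a known signature vector; any subset may be occluded (mask bit 0).
n, d = 8, 32
sigs = rng.standard_normal((n, d))
true_mask = np.array([1, 1, 0, 1, 1, 0, 1, 1])
observed = true_mask @ sigs + 0.05 * rng.standard_normal(d)

def cost(mask):
    """Residual between observed data and the hypothesized subset."""
    return np.sum((observed - mask @ sigs) ** 2)

# GA over binary masks: truncation selection, one-point crossover,
# bit-flip mutation. Note the number of occluded scatterers is not
# fixed in advance; it emerges from the estimated mask.
pop = rng.integers(0, 2, size=(40, n))
for _ in range(80):
    order = np.argsort([cost(m) for m in pop])
    parents = pop[order[:20]]
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, n)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n) < 0.1
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, kids])
best = pop[np.argmin([cost(m) for m in pop])]
print(best.tolist())
```

Because elitist selection keeps the best masks and any wrong bit costs roughly a full signature energy, the GA converges quickly on this small synthetic example.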
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
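The role of the relative-entropy difference can be illustrated with scalar Gaussians (a minimal sketch: the actual distribution, the slightly wrong assumed model, and the observation length are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)

def kl_gauss(m1, s1, m2, s2):
    """Relative entropy D( N(m1, s1^2) || N(m2, s2^2) )."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

# Actual class-0 distribution versus the two assumed (trained) models;
# the assumed class-0 model is slightly inaccurate.
actual = (0.0, 1.0)
assumed0 = (0.1, 1.1)
assumed1 = (1.0, 1.0)

# For a long iid observation vector, the normalized log-likelihood
# ratio concentrates at D(actual||assumed1) - D(actual||assumed0);
# the class-conditional error probability is governed by the sign
# and size of this relative-entropy difference.
gap = kl_gauss(*actual, *assumed1) - kl_gauss(*actual, *assumed0)

def loglik(x, m, s):
    return -0.5 * np.sum(((x - m) / s) ** 2) - len(x) * np.log(s)

# Monte Carlo check with length-200 observation vectors:
errors = 0
for _ in range(500):
    x = rng.normal(*actual, size=200)
    if loglik(x, *assumed0) < loglik(x, *assumed1):
        errors += 1
print(gap > 0, errors / 500)
```

With a positive gap the plug-in test still decides class 0 almost always, but a model-based prediction that ignored the assumed/actual discrepancy would report an error rate computed from the wrong divergences.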
We present a performance model for estimating the likelihood function and posterior probability of classes in a multiple-look SAR ATR classifier. We extend performance estimation to performance prediction in order to assess the effects of additional looks at different targets in a scene. This likelihood improvement model depends on a variety of factors including the resulting look angle diversity and the resolution of the sensor. The performance model parameters are estimated from classification scores and multi-look performance with real data, but could also be developed from simulations in cases where no data exist. Finally, we propose a transformation from the predicted performance to a value for each look that is used to optimize asset tasking. The value transformation is based on the target importance and absolute posterior probability.
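The multi-look posterior update and the value transformation can be sketched as follows (all likelihoods, the importance weight, and the value formula are illustrative stand-ins for the paper's trained model):

```python
import numpy as np

# Uniform prior over three hypothetical target classes.
prior = np.array([1 / 3, 1 / 3, 1 / 3])

def update(posterior, likelihood):
    """Bayes update of the class posterior with one look's likelihoods."""
    p = posterior * likelihood
    return p / p.sum()

# Three looks at a class-0 target, each moderately informative
# (diversity in look angle would change these per-look likelihoods).
looks = [np.array([0.6, 0.25, 0.15]),
         np.array([0.5, 0.30, 0.20]),
         np.array([0.7, 0.20, 0.10])]
post = prior
for lk in looks:
    post = update(post, lk)

# Value of the looks for asset tasking: importance-weighted gain in
# the top posterior probability.
importance = 2.0
value = importance * (post.max() - prior.max())
print(post.argmax(), round(value, 2))
```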
This discussion/tutorial consists of a few short discussions on the (theoretical) trade-offs of various choices in constructing benchmarks. Most of the results discussed here are "common sense" at a high level. However, all of this "common sense" is (to some extent) quantifiable, and occasionally that quantification is useful. These short discussions cover: 1) Prediction Domains and Loss Functions, 2) Prediction Settings, and 3) Assumption Failures.
A performance model for FLIR automatic target recognition is discussed. Key aspects of this model are that (a) relationships between sensor optical resolution, sampling, noise, and estimated P(ID) are implicitly defined; (b) premised on the particular features used, the analysis of the matching structure leads to an explicit "shape similarity" measure between targets; (c) the notion of "shape" includes both internal signature attributes and external contour information; (d) the values of this shape measure can be measured for both true and false target models using combined CAD rendering, sensor models, and features; (e) in addition to P(ID), the system is also able to predict the probability of declaration P(Declare|Target) for a given true target; (f) the system is able to predict the probability of false declaration for a given confuser or confuser-to-target similarity specification; (g) M-class problems (M ≥ 2) can be handled; and (h) the diagonals of confusion matrices can be estimated directly using this approach. The model relies on analysis of the performance of a particular type of shape-based features, with the goal of developing explicit relationships from low-level features through high-level model matching. Based on the predicted densities of the ensemble of features, the system approximates an expression for the likelihood of the observed features under noisy conditions with a given sensor, conditioned on the target type, aspect, and range. Using some engineering approximations related to the distance-transform-type method of matching that is analyzed, a tractable form of a non-unique correspondence-based approximate likelihood expression is obtained, which can be used to estimate bounds on the performance of similar sensor/ATR systems that rely on these features.
Such an approach could also be applied to other phenomenologies, such as synthetic aperture radar, using an appropriate low-level model of the extracted features. Predictive models using a CAD-based target signature rendering package have been used to generate target signatures. Trade-offs for various combinations of sensor/algorithm design parameters can in principle be carried out quickly and easily using this approach.
An ATR theory is useful both for developing ATR algorithms and for evaluating their performance. We present here a model-based ATR theory based on hierarchies of parts. Objects and parts are represented as nodes in an attributed graph, while the links between nodes represent invariant relations between the parts. These can be either geometric (quantitative) invariants or structural (qualitative) ones. A metric is used to measure the distance of an object from known models, based on the distances of the parts. This differs from traditional pixel-based or template-based metrics, which are very sensitive to any variation in the object. Unlike previous graph-based methods, we do not try to segment the object before recognizing it. Rather, the segmentation is guided by the models and goes hand-in-hand with the recognition process. The theory has been implemented for simple cases.
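A part-based metric of this flavor can be sketched as follows (the graph encoding, part attributes, relation penalty, and the "tank"/"truck" models are all hypothetical illustrations, not the theory's actual representation):

```python
# Objects are attributed graphs: nodes are parts with a scalar
# attribute, links are invariant relations between parts. The
# object-to-model distance combines per-part attribute distances
# with a penalty for each model relation absent in the object.
def graph_distance(obj, model, w_rel=1.0):
    # Attribute distance for parts shared with the model...
    d = sum(abs(obj["parts"][p] - a)
            for p, a in model["parts"].items() if p in obj["parts"])
    # ...plus full weight for model parts missing from the object.
    d += sum(a for p, a in model["parts"].items() if p not in obj["parts"])
    # Structural penalty for violated (missing) relations.
    missing = set(model["links"]) - set(obj["links"])
    return d + w_rel * len(missing)

tank = {"parts": {"hull": 1.0, "turret": 0.8, "barrel": 0.5},
        "links": {("hull", "turret"), ("turret", "barrel")}}
truck = {"parts": {"hull": 1.0, "cab": 0.6},
         "links": {("hull", "cab")}}
observed = {"parts": {"hull": 1.1, "turret": 0.7, "barrel": 0.6},
            "links": {("hull", "turret"), ("turret", "barrel")}}
print(graph_distance(observed, tank) < graph_distance(observed, truck))
```

Because the distance is assembled from part distances rather than pixel overlap, small attribute perturbations change it gracefully instead of catastrophically.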
Issues in ATR Theory emerge by considering three levels of the ATR problem. The term "monolithic architecture (MA)-ATR" is used for problems of standard classification theory. The MA-ATR level has seen recent unification of theories that should be aggressively applied. Modern ATR systems include standard classification-theoretic subsystems (e.g., feature extraction, matching, and discrimination); however, they also add modeling within a search paradigm. These "aggregate architecture (AA)-ATRs" allow more direct inclusion of application-specific prior (non-sample) knowledge. Greater theoretical support is needed for analyzing AA-ATRs at the system level and integrating the strong MA-ATR theories. The third level of the ATR problem returns to the MA-ATR problem and below. The strongest elements of the MA-ATR theories deal with the stochastic aspects of the ATR problem. Structural aspects of ATRs are an important weak link in the MA-ATR theories. Function decomposition provides an "atom" towards a structural theory. Decomposition provides robustness by constructing the MA-ATR's structure from samples, but is intractable. Standard MA-ATR design is tractable, but is brittle because of an ad hoc structure selection. The key issue in either case is to make explicit use of non-sample (typically structural) knowledge in selecting or, better yet, constructing the MA-ATR's structure.
A method of on-the-fly training is presented that uses shape features to store representations of previously seen vehicles. Relationships between features are exploited such that recognition is possible over a range of relative sensor-to-target geometries, given a single or limited number of previously seen views. Initial results on SAR data have used zero crossings on filtered data, in addition to peak features, to perform adaptive matching. Using the AFRL AdaptSAPS system, results for this adaptive approach are presented and discussed. Using a relatively limited number of previously seen samples of a target, the system under test in these experiments was able to begin differentiating a selected target type from other targets and from confusers.