We reported previously on novel methods for designing optical systems with no prior specification of system architecture (e.g., number and type of lenses and their order in the optical train). We studied their efficacy on a set of problems where aberration minimization at the focal plane was the only design objective. Here, we describe enhancements that allow multiple additional objectives (e.g., system sensitivity, size, cost) to be included in a principled manner and show how a family of "non-dominated" systems that approximates the set of designs belonging to a theoretical Pareto front of optimal solutions may be efficiently created.
Commercial ray-tracing programs can optimize system parameters for a given optical architecture (e.g., radii, thickness, and spacing of lenses); however, design of the underlying system architecture (e.g., number of lenses, their type, and their order in the optical train) remains an expensive trial-and-error process driven by prior experience and human intuition. Work on automating the architecture design process has had some success, but the problem remains open. We compare the efficacy of novel methods for encoding the design of optical systems on a simple imaging objective.
We investigate an anomaly detection framework that uses manifold-based distances within the existing skeleton kernel principal component analysis (SkPCA) manifold-learning technique. SkPCA constructs a manifold from an adjacency matrix built using a sparse subsample of the data and a similarity measure. In anomaly detection the anomalous class is by definition rare, and in practice anomalous samples are unlikely to be randomly selected for inclusion in the sparse data subsample. Thus, anomalies should not be well modeled by the SkPCA-constructed model. Here, we consider alternative distance measures based on viewing spectral pixels as points in projective space, that is, each pixel is identified with a one-dimensional line through the origin. Chordal and geodesic distances are computed between hyperspectral pixels, and detection performance leveraging these distances is compared to alternative anomaly detection algorithms. In addition, we introduce Ensemble SkPCA, which utilizes the ensemble of mean, normalized detection scores corresponding to multiple randomly generated skeletons. For acceptable false alarm tolerances, the ensemble detection score derived from the chordal- and geodesic-based methods achieves a higher probability of detection than Euclidean distance-based Ensemble SkPCA or the benchmark RX algorithm.
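For reference, both projective-space distances reduce to simple functions of the principal angle between the lines spanned by a pair of pixels. The sketch below is illustrative only; the function and variable names are ours, not the papers'.

```python
import numpy as np

def projective_distances(x, y):
    """Chordal and geodesic distances between two spectral pixels viewed as
    one-dimensional lines through the origin (invariant to sign and scale)."""
    c = np.abs(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
    c = np.clip(c, 0.0, 1.0)            # guard against round-off
    geodesic = np.arccos(c)             # principal angle between the two lines
    chordal = np.sqrt(1.0 - c ** 2)     # equals sin(geodesic)
    return chordal, geodesic

# Example with two random 100-band pixels
rng = np.random.default_rng(0)
print(projective_distances(rng.normal(size=100), rng.normal(size=100)))
```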
KEYWORDS: Data modeling, Image fusion, Hyperspectral imaging, Principal component analysis, Multispectral imaging, Data fusion, Detection and tracking algorithms
Kernel-based methods for anomaly detection have recently shown promising results, surpassing those of model-based statistical methods. This success is due in part to the distribution of the non-anomalous data failing to conform to the distribution model assumed by model-based statistical methods. Alternatively, the skeleton kernel principal component analysis anomaly detector (sKPCA-AD) assumes that a better background model can be learned by constructing a graph from a small, randomly sampled subset of the data (a skeleton). By definition, anomalies are rare; thus the sample is assumed to consist chiefly of non-anomalous data, and correspondingly the learned graph models the background. Error magnitudes in the model's representation of data from the full data set are used as an anomaly measure. Additionally, the smaller skeleton sample makes kernel methods computationally feasible for hyperspectral images.
The sKPCA-AD has proven successful using unordered spectral pixel data; however, anomalies are often larger objects composed of many neighboring pixels. In this paper we show that fusing spatial information derived from a panchromatic image with spectral information from a hyper/multispectral image can increase the accuracy of the sKPCA-AD. We accomplish this by creating several joint spectral-spatial kernels that are then used by the sKPCA-AD to learn the underlying background model. We account for the variability introduced by the random subsampling by reporting averaged results and variance over several skeletons. We test our methods on two representative datasets, and our results show improved performance with one of the proposed joint kernel methods.
We investigate an anomaly detection framework that leverages manifold learning techniques to learn a background model. A manifold is learned from a small, uniformly sampled subset under the assumption that any anomalous samples will have little effect on the learned model. The remaining data are then projected into the manifold space and their projection errors used as detection statistics. We study detection performance as a function of the interplay between sub-sampling percentage and the abundance of anomalous spectra relative to background class abundances using synthetic data derived from field collects. Results are compared against both graph-based and traditional statistical models.
A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper, and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions that are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task, given a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped, non-overlapping filters that come with the plenoptic camera and to previously acquired full-resolution hyperspectral data.
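A minimal sketch of the optimization setup is given below. The beta-shaped passband, the wavelength grid, and the Mahalanobis-style target/background separation objective are illustrative assumptions; the papers' actual parameterization and objectives (including the compressive-sensing reconstruction error) are not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution

wavelengths = np.linspace(400.0, 1000.0, 301)          # nm; assumed sensor range

def beta_filter(lmbda, center, width, taper):
    """Beta-like passband: taper = 0 is flat-topped (boxcar); larger taper rounds
    and narrows the edges. Illustrative parameterization only."""
    x = (lmbda - (center - width / 2.0)) / width        # map the passband onto [0, 1]
    t = np.zeros_like(lmbda)
    inside = (x > 0.0) & (x < 1.0)
    t[inside] = (x[inside] * (1.0 - x[inside])) ** taper
    peak = t.max()
    return t / peak if peak > 0 else t

def objective(params, bg_spectra, tgt_spectrum):
    """Negative Mahalanobis separation between the coded target and background."""
    filters = np.stack([beta_filter(wavelengths, *params[3 * i:3 * i + 3])
                        for i in range(16)])            # (16, n_bands)
    code_bg = bg_spectra @ filters.T                    # band-integrated responses
    code_tgt = tgt_spectrum @ filters.T
    d = code_tgt - code_bg.mean(axis=0)
    cov = np.cov(code_bg, rowvar=False) + 1e-6 * np.eye(16)
    return -float(d @ np.linalg.solve(cov, d))

# center (nm), width (nm), and taper for each of the 16 filters
bounds = [(400.0, 1000.0), (20.0, 300.0), (0.0, 3.0)] * 16
# result = differential_evolution(objective, bounds, args=(bg_spectra, tgt_spectrum))
```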
We investigate the parameters that govern an unsupervised anomaly detection framework that uses nonlinear techniques to learn a better model of the non-anomalous data. A manifold or kernel-based model is learned from a small, uniformly sampled subset in order to reduce computational burden and under the assumption that anomalous data will have little effect on the learned model because their rarity reduces the likelihood of their inclusion in the subset. The remaining data are then projected into the learned space and their projection errors used as detection statistics. Here, kernel principal component analysis is considered for learning the background model. We consider spectral data from an 8-band multispectral sensor as well as panchromatic infrared images treated by building a data set composed of overlapping image patches. We consider detection performance as a function of patch neighborhood size as well as embedding parameters such as kernel bandwidth and dimension. ROC curves are generated over a range of parameters and compared to RX performance.
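A minimal sketch of the projection-error score used throughout this family of detectors is shown below, with an uncentered RBF kernel and a random skeleton; the kernel bandwidth (gamma), skeleton size, and embedding dimension are the kinds of parameters whose influence is studied above. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # for full scenes this should be computed in chunks to limit memory use
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_anomaly_scores(X, n_skeleton=200, n_components=10, gamma=1e-3, seed=0):
    """Feature-space reconstruction error relative to a KPCA model fit on a
    randomly sampled skeleton (uncentered kernel for brevity)."""
    rng = np.random.default_rng(seed)
    S = X[rng.choice(len(X), size=n_skeleton, replace=False)]
    K = rbf_kernel(S, S, gamma)
    w, V = np.linalg.eigh(K)
    w, V = w[::-1][:n_components], V[:, ::-1][:, :n_components]
    A = V / np.sqrt(np.clip(w, 1e-12, None))   # dual coefficients of the components
    proj = rbf_kernel(X, S, gamma) @ A         # projections onto the leading components
    # k(x, x) = 1 for the RBF kernel, so the residual energy is 1 - ||projection||^2
    return 1.0 - (proj ** 2).sum(axis=1)
```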
We exploit manifold learning algorithms to perform image classification and anomaly detection in complex scenes involving hyperspectral land cover and broadband IR maritime data. The results of standard manifold learning techniques are improved by including spatial information. This is accomplished by creating super-pixels which are robust to the affine transformations inherent in natural scenes. We utilize techniques from harmonic analysis and image processing, namely rotation, skew, flip, and shift operators, to develop a more representative graph structure that defines the data-dependent manifold.
We introduce a detection and tracking algorithm for panoramic imaging systems intended
for operations in high-clutter environments. The algorithm combines correlation- and model-based
tracking in a manner that is robust to occluding objects but without the need for a separate
collision prediction module. Large data rates associated with the panoramic imager necessitate
the use of parallel computation on graphics processing units. We discuss the queuing and
tracking algorithms as well as practical considerations required for real-time implementation.
An automated approach for detecting the presence of watercraft in a maritime environment characterized by regions of land, sea, and sky, as well as multiple targets and both water- and land-based clutter, is described. The detector correlates a wavelet model of previously acquired images with those obtained from newly acquired scenes. The resulting detection statistic outperforms two other detectors in terms of probability of detection for a given (low) false alarm rate. It is also shown how the detection statistics associated with different wavelet models can be combined in a way that offers still further improvements in performance. The approach is demonstrated to be effective in finding watercraft in previously collected short-wave infrared imagery.
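The correlation step of such a detector amounts to cross-correlating a learned, zero-mean template with each newly acquired scene; the sketch below shows only that step (the wavelet-model training and the fusion of multiple models are not reproduced).

```python
import numpy as np
from scipy.signal import fftconvolve

def detection_statistic_map(image, template):
    """Cross-correlate a zero-mean, unit-norm template with the scene; pixels whose
    statistic exceeds a threshold set by the allowed false alarm rate are detections."""
    t = template - template.mean()
    t /= np.linalg.norm(t) + 1e-12
    return fftconvolve(image - image.mean(), t[::-1, ::-1], mode="same")
```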
KEYWORDS: Associative arrays, Data modeling, Image quality, Image compression, Super resolution, Denoising, Chemical species, Wavelets, Video, Short wave infrared radiation
We present several improvements to published algorithms for sparse image modeling with the goal of
improving processing of imagery of small watercraft in littoral environments. The first improvement
is to the K-SVD algorithm for training over-complete dictionaries, which are used in sparse
representations. It is shown that the training converges significantly faster by incorporating multiple
dictionary (i.e., codebook) update stages in each training iteration. The paper also provides several
useful and practical lessons learned from our experience with sparse representations. Results of three
applications of sparse representation are presented and compared to state-of-the-art methods: image compression, image denoising, and super-resolution.
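One plausible reading of the faster-converging training loop is sketched below: orthogonal matching pursuit supplies the sparse codes, and the atom-by-atom SVD update is repeated n_update_stages times within each iteration. Variable names, the use of scikit-learn's orthogonal_mp, and the loop structure are our assumptions, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_train(Y, n_atoms=64, sparsity=5, n_iter=10, n_update_stages=3, seed=0):
    """K-SVD sketch with multiple dictionary (codebook) update stages per iteration.
    Y is (n_features, n_signals); columns of the returned D have unit norm."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)     # sparse-coding stage
        for _ in range(n_update_stages):                      # repeated codebook updates
            for k in range(n_atoms):
                users = np.nonzero(X[k, :])[0]
                if users.size == 0:
                    continue
                # residual with atom k's contribution removed, restricted to its users
                E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
                U, s, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, k] = U[:, 0]
                X[k, users] = s[0] * Vt[0]
    return D, X
```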
We present an approach for discriminating among different classes of imagery in a scene. Our intended application is the detection of small watercraft in a littoral environment where both targets and land- and sea-based clutter are present. The approach works by training different overcomplete dictionaries to model the different image classes. The likelihood ratio obtained by applying each model to the unknown image is then used as the discriminating test statistic. We first demonstrate the approach on an illustrative test problem and then apply
the algorithm to short-wave infrared imagery with known targets.
We present a technique for small watercraft detection in a littoral environment characterized by multiple targets
and both land- and sea-based clutter. The detector correlates a tailored wavelet model trained from previous
imagery with newly acquired scenes. An optimization routine is used to learn a wavelet signal model that
improves the average probability of detection for a fixed false alarm rate on an ensemble of training images.
The resulting wavelet is shown to improve detection on a previously unseen set of test images. Performance is
quantified with ROC curves.
KEYWORDS: Diffusion, Sensors, Image fusion, Data acquisition, Physics, Data fusion, Principal component analysis, Image registration, Signal processing, Analytical research
This work considers the problem of combining high dimensional data acquired from multiple sensors for the
purpose of detection and classification. The sampled data are viewed as a geometric object living in a high-dimensional space. Through an appropriate, distance-preserving projection, those data are reduced to a low-dimensional space. In this reduced space it is shown that different physics of the sampled phenomena reside on different portions of the resulting "manifold," allowing for classification. Moreover, we show that data acquired from multiple sources collected from the same underlying physical phenomenon can be readily combined in the low-dimensional space, i.e., fused. The process is demonstrated on maritime imagery collected from a visible-band
camera.
We use a nonlinear dimensionality reduction technique to improve anomaly detection in a hyperspectral imaging
application. A nonlinear transformation, diffusion map, is used to map pixels from the high-dimensional spectral
space to a (possibly) lower-dimensional manifold. The transformation is designed to retain a measure of distance
between the selected pixels. This lower-dimensional manifold represents the background of the scene with high
probability, and selecting a subset of points reduces the computational overhead associated with the diffusion map.
The remaining pixels are mapped to the manifold by means of a Nyström extension. A distance measure is
computed for each new pixel and those that do not reside near the background manifold, as determined by
a threshold, are identified as anomalous. We compare our results with the RX and subspace RX methods of
anomaly detection.
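A minimal sketch of the embedding, extension, and scoring steps is given below; the Gaussian kernel scale eps, the number of retained coordinates, and the nearest-neighbor scoring rule are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def diffusion_map(X_bg, eps, n_dims=5):
    """Diffusion-map embedding of the selected background pixels."""
    d2 = ((X_bg[:, None, :] - X_bg[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P /= P.sum(axis=1, keepdims=True)            # row-stochastic Markov matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)[1:n_dims + 1]    # drop the trivial constant eigenvector
    return w.real[order], V.real[:, order]

def nystrom_extend(X_new, X_bg, eps, w, V):
    """Nystrom extension: map pixels outside the subset onto the same coordinates."""
    d2 = ((X_new[:, None, :] - X_bg[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P /= P.sum(axis=1, keepdims=True)
    return (P @ V) / w

def anomaly_scores(Y_new, Y_bg):
    """Distance from each extended pixel to its nearest background embedding point."""
    d2 = ((Y_new[:, None, :] - Y_bg[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))
```

Here Y_bg would be the background embedding returned by diffusion_map and Y_new the output of nystrom_extend; pixels whose score exceeds a threshold are flagged as anomalous.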
KEYWORDS: Nonlinear filtering, Signal detection, Signal processing, Receivers, Statistical analysis, Electronic filtering, Linear filtering, Sensors, Complex systems, Analytical research
Higher-order spectral analysis is one approach to detecting deviations from normality in a received signal. In
particular the auto-bispectral density function or "bispectrum" has been used in a number of detection applications.
Both Type-I and Type-II errors associated with bispectral detection schemes are well understood if the
processing is performed on the received signal directly or if the signal is pre-processed by a linear, time invariant
filter. However, there does not currently exist an analytical expression for the bispectrum of a non-Gaussian
signal pre-processed by a nonlinear filter. In this work we derive such an expression and compare the performance
of bispectral-based detection schemes using both linear and nonlinear receivers. Comparisons are presented in
terms of both Type-I and Type-II detection errors using Receiver Operating Characteristic curves. It is shown
that using a nonlinear receiver can offer some advantages over a linear receiver. Additionally, the nonlinear
receiver is optimized using genetic programming (differential evolution) to choose the filter coefficients.
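For reference, the direct (segment-averaged) estimate of the auto-bispectrum has the form B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))]; a short sketch follows, with window choice and segment lengths chosen arbitrarily.

```python
import numpy as np

def bispectrum(x, nfft=128, noverlap=64):
    """Direct FFT-based estimate of the auto-bispectrum by averaging over
    windowed, overlapping segments of the received signal."""
    step = nfft - noverlap
    win = np.hanning(nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, step)]
    for s in segments:
        X = np.fft.fft(win * (s - s.mean()))
        i = np.arange(nfft)
        B += X[i, None] * X[None, i] * np.conj(X[(i[:, None] + i[None, :]) % nfft])
    return B / max(len(segments), 1)
```

A detection statistic can then be formed from the magnitude of B over its principal domain, since non-Gaussian signals produce significant bispectral energy while Gaussian noise does not.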
Numerical simulations are used to improve in-band disruption of a phase-locked loop (PLL). Disruptive inputs
are generated by integrating a system of nonlinear ordinary differential equations (ODEs) for a given set of
parameters. Each integration yields a set of time series, of which one is used to modulate a carrier input to the
PLL. The modulation is disruptive if the PLL is unable to accurately reproduce the modulation waveform. We
view the problem as one of optimization and employ an evolutionary algorithm to search the parameter space of
the excitation ODE for those inputs that increase the phase error of the PLL subject to restrictions on excitation
amplitude or power. Restricting amplitude (frequency deviation) yields a modulation that approximates a
square wave. Constraining modulation power leads to a chaotic excitation that requires less power to disrupt
loop operation than either the sinusoid or square wave modulations.
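The phase-error evaluation inside such a search can be sketched with a first-order PLL model, d(phi)/dt = dw(t) - K sin(phi), driven here by the amplitude-constrained square-wave modulation; the loop gain, deviation, and modulation rate below are placeholders, and the chaotic-ODE generation and evolutionary search are not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_LOOP = 2 * np.pi * 50.0      # first-order loop gain (rad/s); placeholder value
DEV = 2 * np.pi * 75.0         # peak frequency deviation (rad/s); placeholder value
F_MOD = 40.0                   # modulation rate (Hz); placeholder value

def freq_deviation(t):
    """Square-wave frequency modulation (the amplitude-constrained case above)."""
    return DEV * np.sign(np.sin(2 * np.pi * F_MOD * t))

def phase_error_ode(t, phi):
    # first-order PLL phase-error dynamics: d(phi)/dt = dw(t) - K sin(phi)
    return freq_deviation(t) - K_LOOP * np.sin(phi)

sol = solve_ivp(phase_error_ode, (0.0, 1.0), [0.0], max_step=1e-4)
print("RMS phase error:", np.sqrt(np.mean(sol.y[0] ** 2)))
```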
KEYWORDS: Autoregressive models, Microsoft Foundation Class Library, Modulation, Evolutionary algorithms, Structural health monitoring, Waveguides, Ultrasonics, Detection and tracking algorithms, Feature extraction, Damage detection
Recent research has shown that chaotic structural excitation and state space reconstruction may be used beneficially
in structural health monitoring (SHM) processes. This relationship has been exploited for the detection of bolt preload reduction by pairing a chaotic waveform with ultrasonic frequency content with a damage detection algorithm based on auto-regressive (AR) modeling. The signal is actively applied to a structure using a bonded macro fiber
composite (MFC) patch. The response generated by the mechanical interaction of the MFC patch with the structure is
then measured by other affixed MFC patches. In this study the suitability of particular chaotic waveforms will be
investigated through the use of evolutionary algorithms. These algorithms are able to find an optimum excitation for
maximum damage-state discernibility, with a fitness two orders of magnitude greater than that obtained by choosing random parameters for signal creation.
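The AR-based damage feature referenced above can be sketched as a residual-error comparison: fit an AR model to the baseline response and score later responses by how poorly that model predicts them. The least-squares fit and RMS feature below are a generic illustration, not the study's exact algorithm.

```python
import numpy as np

def ar_fit(x, order=10):
    """Least-squares AR(p) fit: x[n] is regressed on x[n-1], ..., x[n-p]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

def ar_residual_rms(x, coeffs):
    """RMS one-step prediction error of a previously fit AR model on signal x."""
    order = len(coeffs)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    return np.sqrt(np.mean((x[order:] - X @ coeffs) ** 2))

# damage feature: ar_residual_rms(test, ar_fit(baseline)) normalized by the
# baseline residual; values well above 1 suggest the structural response has changed
```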
We have demonstrated that the parameters of a system of ordinary differential equations may be adjusted via an
evolutionary algorithm to produce 'optimized' deterministic excitations that improve the sensitivity and noise
robustness of state-space based damage detection in a supervised learning mode. Similarly, in this work we show
that the same approach can select an 'optimum' bandwidth for a stochastic excitation to improve the detection
capability of that same metric. This work demonstrates that an evolutionary algorithm can be used to shape or color
noise in the frequency domain, such that improvement is seen in the sensitivity of the detection metric. Properties of
the improved stochastic excitations are compared to their deterministic counterparts and used to draw inferences
concerning a globally preferred excitation type for the model spring-mass system.
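A sketch of the kind of stochastic-excitation candidate an evolutionary algorithm would evaluate is shown below: white noise band-limited by a Butterworth filter whose passband edges play the role of the tuned bandwidth parameters, with the output normalized so candidates are compared at equal power. The filter order and sampling rate are arbitrary here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandlimited_excitation(f_lo, f_hi, fs=2000.0, duration=10.0, seed=0):
    """Colored-noise excitation: white noise shaped to the band [f_lo, f_hi] Hz."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(int(fs * duration))
    b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="bandpass")
    shaped = filtfilt(b, a, white)
    return shaped / np.std(shaped)     # equal-power comparison across candidates
```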
Dynamic interrogation of structures for the purposes of damage identification is an active area of research within the field of structural health monitoring with recent work focusing on the use of chaotic excitations and state-space analyses for improved damage detection. Inherent in this overall approach is the specific interaction between the chaotic input and the structure's eigenstate. The sensitivity to damage is theoretically enhanced by special tailoring of the input in terms of stability interaction with the structure. This work outlines the use of an evolutionary program to search the parameter space of a chaotic excitation for those parameters that are best suited to appropriately couple the excitation with the structure for enhanced damage detection. State-space damage identification metrics are used to detect damage in a computational model driven by excitations produced via the evolutionary program with non-optimized excitations used as comparison cases.
One paradigm within the structural health monitoring field involves analyzing the vibration response of structures as a method of detecting damage. Recent work has focused on extracting damage-sensitive features from the state-space representation of the structural response. Some of these features involve constructing a baseline attractor and an attractor at some later time and using the baseline to predict the evolution of the future attractor. An inability to accurately predict said evolution can be construed as possible damage to the structure. Such attractor-based methods are sensitive to a number of parameters related to reconstruction of the attractor, prediction techniques, and statistical accuracy. This work couples various input excitations with experimental data in an attempt to optimize these parameters for maximum sensitivity to damage.
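The baseline/future attractor comparison can be sketched as a simple analog (nearest-neighbor) cross-prediction: embed both responses, and for each point on the later attractor predict its evolution from the futures of its nearest neighbors on the baseline attractor. Embedding dimension, delay, horizon, and neighbor count are the kinds of parameters the study seeks to tune; the specific values below are placeholders.

```python
import numpy as np

def delay_embed(x, dim=3, tau=10):
    """Time-delay embedding of a scalar response into a dim-dimensional state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def cross_prediction_error(baseline, test, dim=3, tau=10, horizon=5, n_neighbors=10):
    """Mean error in predicting the test attractor's evolution from the baseline."""
    A, B = delay_embed(baseline, dim, tau), delay_embed(test, dim, tau)
    A_now, A_fut = A[:-horizon], A[horizon:]
    B_now, B_fut = B[:-horizon], B[horizon:]
    errors = []
    for b_now, b_fut in zip(B_now, B_fut):
        nn = np.argsort(np.linalg.norm(A_now - b_now, axis=1))[:n_neighbors]
        errors.append(np.linalg.norm(A_fut[nn].mean(axis=0) - b_fut))
    return float(np.mean(errors))
```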
Detection of the change in the vibration response of a structure as a means of damage detection has long been explored in the structural health monitoring field. Recently, damage detection metrics based on state-space attractor comparisons have been presented in the literature. This work compares various state-space attractor methods within an experimental context in an effort to determine the sensitivity of the methods to induced damage. The various methods are judged according to damage discrimination capability and computational effort.
Structural system identification, historically, has largely consisted of seeking linear relationships among vibration time series data, e.g., auto/cross-correlations, modal analysis, ARMA models, etc. This work considers how dynamical relationships may be viewed in terms of 'information flow' between different points on a structure. Information or interdependence metrics (e.g., time-delayed mutual information) are able to capture both linear and nonlinear aspects of the dynamics, including higher-order correlations. This work computes information-based metrics on a frame experiment where nonlinearity is introduced by the loosening of a bolt. Both linear and nonlinear measures of dynamical interdependence are then used to assess the degree of degradation to the joint. Results indicate clear differences in the way linear and nonlinear measures quantify the bolt loosening.
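A histogram-based estimate of the time-delayed mutual information between two sensor channels, which captures both the linear and higher-order dependence discussed above, can be sketched as follows (bin count and delay handling are illustrative choices):

```python
import numpy as np

def time_delayed_mutual_information(x, y, delay, n_bins=32):
    """Estimate I(x(t); y(t + delay)) in bits from a joint histogram."""
    if delay > 0:
        x, y = x[:-delay], y[delay:]
    elif delay < 0:
        x, y = x[-delay:], y[:delay]
    pxy, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```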