We study, by way of simulations, the creation and propagation of noise in a dispersive medium. The noise is
generated as the sum of elementary signals, and we study its noise-like behavior with regard to the properties
of the elementary signals and the dispersion relation.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A robust target tracking algorithm is proposed to overcome a number of challenges often associated with FLIR
imagery. Several disjoint intermediate background models are used to form an accurate and dynamic representation of
the current background. The signatures of moving targets are captured and enhanced through a set of image filters,
while the next movements of these targets are reasonably estimated using a set of kinematic predictors. By integrating
the effective target detection method with the robust background modeling process, an excellent target tracking
performance can be achieved.
Critical to a large portion of mission scenarios within the intelligence, surveillance, and reconnaissance (ISR) sensor
community is the challenge of ensuring that designated targets of interest are reliably tracked in dynamic environments.
Current-generation trackers frequently lose track when targets become temporarily obscured, shadowed, or in close
proximity to other objects. In this paper we propose and demonstrate a generic confirmation-of-identity module that is
based on the Distance Classifier Correlation Filter (DCCF) and is applicable to a variety of tracking technologies. The
prevailing idea of this technique is that during a tracker's valid-track phase, learning exemplars are provided to a
filter-building process and templates of the tracked targets are created online in real time. Differences in orientation are
handled through the creation of synthetic views using real target views and image warping techniques. After obscuration
and/or during periods of track ambiguity, each new candidate track is matched against the prior valid track(s) using
DCCF matching to resolve uncertainty.
This paper presents an automatic approach for camera/image-based detection, recognition, and tracking of flying objects
(planes, missiles, etc.). The method detects appearing objects and recognizes re-appearing targets. It uses a feature-based
statistical modeling approach (e.g., HMM) for motion-based recognition and an image-feature (e.g., shape) based indexed
database of pre-trained object classes, suitable for recognizing known objects and alerting on unknown ones. The method
can be used for detection of flying objects, recognition of the same object category across multiple views/cameras, and
signaling of unusual motions and shape appearances.
Learning is one of the most crucial components for increasing the generality, flexibility, and robustness of computer
vision systems. At present, image analysis algorithms adopt particular machine learning methods, resulting in rather
superficial learning. We present a new paradigm for constructing essentially learnable image analysis algorithms.
Learning is interpreted as optimization of image representations, and the notion of representation is formalized within
an information-theoretic framework. The optimization criterion is derived from the well-known minimum description
length (MDL) principle. Adaptation of the MDL principle in computer vision has been receiving increasing attention;
however, the principle has typically been applied in a heuristic way. We derive a representational MDL (RMDL)
principle that fills the gap between the theoretical MDL principle and its practical applications. The RMDL principle
gives criteria both for optimal model selection for a single image within a given representation and for optimal
representation selection for an image sample. Thus, it can be used to optimize computer vision systems functioning
within a specific environment. The adequacy of the RMDL principle was validated on segmentation-based
representations applied to different object domains. A method for learning local features as representation optimization
was also developed; it outperformed some popular methods with predefined representations, such as SURF. The
paradigm can therefore be regarded as promising.
This paper presents a wideband jamming-emitter localization method based on the fusion of multiple direction-of-arrival
(DOA) and time-difference-of-arrival (TDOA) measurements obtained from multiple unmanned aerial vehicles (UAVs).
In this technique, we assume that multiple trajectory-controllable UAVs are available and are equipped with smart
antennas to estimate the emitter's angle (DOA) with respect to themselves. In addition, the UAVs communicate with
each other to support the estimation of the TDOA between pairs of UAVs and the emitter. The obtained DOA and
TDOA information is fused at one UAV using an extended Kalman filter (EKF) to localize and track the mobile or static
jamming emitter. In this method, we use DOA fusion to provide a good initialization of the emitter position, which
guarantees the EKF's convergence speed. Localization accuracy is provided by the fusion of TDOAs when three or more
UAVs are available. Simulation results show that with three UAVs the localization/tracking error is less than 15% of the
range-difference estimation error's standard deviation, which is accurate enough for jamming-emitter destruction.
This paper discusses a novel image noise reduction strategy based on the use of adaptive image filter kernels. Three
adaptive filtering techniques are discussed and a case study based on a novel Adaptive Gaussian Filter is presented. The
proposed filter allows the noise content of the imagery to be reduced whilst preserving edge definition around important
salient image features. Conventional adaptive filtering approaches are typically based on the adaptation of one or two
basic filter kernel properties and use a single image content measure. In contrast, the technique presented in this paper is
able to adapt multiple aspects of the kernel size and shape automatically according to multiple local image content
measures which identify pertinent features across the scene. Example results which demonstrate the potential of the
technique for improving image quality are presented. It is demonstrated that the proposed approach provides superior
noise reduction capabilities over conventional filtering approaches on a local and global scale according to performance
measures such as Root Mean Square Error, Mutual Information and Structural Similarity. The proposed technique has
also been implemented on a Commercial Off-the-Shelf Graphical Processing Unit platform and demonstrates excellent
performance in terms of image quality and speed, with real-time frame rates exceeding 100 Hz. A novel method which is
employed to help leverage the gains of the processing architecture without compromising performance is discussed.
In this paper, a new approach to multi-class target recognition is proposed for remote sensing image analysis. A
multi-class feature model is built based on sharing features among classes. To make the recognition process efficient, we
adopt adaptive feature selection: in each layer of the integrated feature model, the most salient and stable features are
selected first, followed by the less salient ones. Experiments demonstrate that the proposed approach is computationally
efficient and adapts to scene variation.
The reliability of data analysis from sensors is the main factor in making the right decisions for target recognition.
Obviously, this reliability depends on the quality of the sensors and the processing electronics. However, cases where the
target is clearly determined are not numerous; more often, we receive partial and distorted images from the sensors
employed. Thus, we face the task of determining the correct initial image, which was distorted by various factors before
and after it was detected by the sensors. The proposed approach is an adaptive intelligent system that uses algorithms
and an updatable database, together with the possibility of changing the detection system's parameters and modes of
operation depending on the signal received from the identified objects.
Correlation filters (CFs) can detect multiple targets in one scene, making them well-suited for automatic target
recognition (ATR) applications. Quadratic CFs (QCFs) can improve performance over linear CFs. QCFs are able
to detect one class of targets and reject clutter. We present a method to increase the QCF capabilities to detect
two classes of targets and reject clutter. We integrate the ATR tasks of detection, recognition, and tracking
using the Multi-Frame Correlation Filter (MFCF) framework. Our simulation results demonstrate
the algorithm's ability to detect multiple targets from two classes while rejecting clutter.
A research area based on the application of information theory to machine learning has attracted considerable interest in
the last few years. This research area has been coined information-theoretic learning within the community. In this paper
we apply elements of information-theoretic learning to the problem of automatic target recognition (ATR). A number of
researchers have previously shown the benefits of designing classifiers based on maximizing the mutual information
between the class data and the class labels. Following prior research in information-theoretic learning, we show
that quadratic mutual information, derived using a special case of the more general Rényi entropy, can
be used for classifier design. In this implementation, a simple subspace projection classifier is formulated to find the
optimal projection weights such that the quadratic mutual information between the class data and the class labels is
maximized. This subspace projection accomplishes a dimensionality reduction of the raw data set wherein information
about the class membership is retained while irrelevant information is discarded. A subspace projection based on this
criterion preserves as much class discriminability as possible within the subspace. For this paper, laser radar images are
used to demonstrate the results. Classification performance against this data set is compared for a gradient descent MLP
classifier and a quadratic mutual information MLP classifier.
There are many applications for which it is important to resolve the location and motion of a target.
For the static situation in which a target transmitter and several receivers are not in motion, the target may be
completely resolved by triangulation using relative time delays estimated by several receivers at known locations.
These delays are normally estimated from the location of peaks in the magnitude of the cross-correlation function.
For active radars, a transmitted signal is reflected by the target, and range and radial velocity are estimated
from the delay and Doppler effects on the received signal. In this process, Doppler effects are conventionally
modeled as a shift in frequency, and delay and Doppler are estimated from a cross-ambiguity function (CAF)
in which delay and Doppler frequency shift are assumed to be independent and approximately constant. Delay
and Doppler are jointly estimated as the location of the peak magnitude of the CAF plane. We present methods
for accurately estimating delay for the static case and delay and the time-varying Doppler effects for non-static
models, such as the radar model.
Some marine mammals as well as bats are known to emit sophisticated waveforms while searching for objects or hunting prey. Some dolphins have been observed to change their sonar pulse depending on the environment. Incorporating these strategies into sonar waveform and receiver design has become an active area of research. In this paper, we explore the application of an optimal waveform design scheme recently given by Kay, to the detection of elastic objects. We examine the benefits of optimal waveform design versus transmitting a linear FM waveform, as well as performance loss suffered by assuming a point target. The optimization approach designs the magnitude spectrum of the transmit waveform and, accordingly, there is an unlimited number of "optimal" transmit waveforms with the same magnitude spectrum. We propose a time domain optimization criterion to obtain the transmit waveform with the optimal magnitude spectrum and the smallest possible duration, as well as the waveform with the optimal magnitude spectrum and the longest possible duration. The former waveform allows for higher ping rates, but necessarily has higher time domain peak power, while the latter waveform has lower time domain peak power and lower ping rates. A method to obtain waveforms that are a blend of these two extremes is also presented, allowing a smooth trade-off between ping rate and peak power.
For some time, applying the theory of pattern recognition and classification to radar signal processing has been
a topic of interest in the field of remote sensing. Efficient operation and target indication are often hindered by
the signal background, which can have properties similar to those of the signal of interest. Because noise and clutter
may constitute most of the response of a surveillance radar, aircraft and other targets of interest can be seen
as anomalies in the data. We propose an algorithm for detecting these anomalies against a heterogeneous clutter
background in each range-Doppler cell, the basic unit of radar data defined by the resolution in range, angle,
and Doppler. The analysis is based on the time history of the response in a cell and its correlation with the
spatial surroundings. If the newest time window of the response in a resolution cell differs statistically from the
time history of the cell, the cell is declared anomalous. Normal cells are classified as noise or as different types of
clutter based on their strength in each Doppler band. Anomalous cells are analyzed using a longer time window,
which emulates longer coherent illumination. Based on the decorrelation behavior of the response in the long
time window, the anomalous cells are classified as clutter, an airplane, or a helicopter. The algorithm is tested
with both experimental and simulated radar data; the experimental data were recorded in a forested landscape.
Previously we have given explicit expressions for the moments of a pulse propagating in a dispersive
medium. Liu and Yeh have given the moments for a pulse in a random medium with no dispersion.
In this paper we derive the time moments of a pulse propagating in a random medium with
dispersion.
Modeling of the electromagnetic (EM) scattering mechanisms from two-dimensional (2-D) time-evolving sea surfaces is
a particularly complicated problem. The intricate structure of surface waves and the scattering models noticeably
influence the simulated radar signatures. Scattering calculations and Doppler spectra of sea surfaces have been
intensively studied, experimentally as well as theoretically, over the past decades. However, to the authors' knowledge,
very few results can be found in the literature on Doppler spectra from two-dimensional time-evolving nonlinear sea surfaces.
In this work we focus on the Doppler spectral characteristics from 2-D time-evolving nonlinear sea surfaces. Based on
Creamer's sea surface model, the first-order small slope approximation (SSA) method is applied to solve the 3-D
scattering problem. The Doppler spectra of the backscattered signals from 2-D time-evolving sea surfaces are studied for
different incident angles (from normal to grazing) as well as wind directions (from upwind to crosswind). The impacts of
the nonlinearity on the Doppler shifts and spectral widths of backscattered signals are analyzed.
Future target acquisition missions of military aircraft require a robust classification and tracking system for ground
targets. In combination with onboard ATD systems, a fast "Find, Fix, Track" cycle for airborne platforms using EO, IR
and SAR imaging sensors should be achieved. For EO/IR image sequences, a 3D matching and pose estimation method
was developed by EADS internal research. The approach determines the resemblance between rendered 3D CAD models
and sensor images to identify the best-matching object pose by optimizing different similarity measures. In order to
assess the suitability of this method for real-world military aircraft missions, the present paper introduces a number of
robustness requirements w.r.t. sensors, scenarios, object classes and environmental conditions and systematically
evaluates the proposed method on a set of image sequences ranging from purely synthetic over laboratory conditions to
real-world recordings, in a rapid prototyping environment using graphics-card acceleration techniques. The outlook
shows possible extensions of the system, e.g., tracking and hypothesis management modules, as well as the steps
necessary to implement and integrate the selected method into a real-time embedded onboard mission system.
3D imagery has a well-known potential for improving situational awareness and battlespace visualization by
providing enhanced knowledge of uncooperative targets. This potential arises from the numerous advantages
that 3D imagery has to offer over traditional 2D imagery, thereby increasing the accuracy of automatic target
detection (ATD) and recognition (ATR). Despite advancements in both 3D sensing and 3D data exploitation,
3D imagery has yet to demonstrate a true operational gain, partly due to the processing burden of the massive
dataloads generated by modern sensors. In this context, this paper describes the current status of a workbench
designed for the study of 3D ATD/ATR. Among the project goals is the comparative assessment of algorithms
and 3D sensing technologies given various scenarios. The workbench comprises three components: a
database, a toolbox, and a simulation environment. The database stores, manages, and edits input data of
various types such as point clouds, video, still imagery frames, CAD models and metadata. The toolbox features
data processing modules, including range data manipulation, surface mesh generation, texture mapping, and
a shape-from-motion module to extract a 3D target representation from video frames or from a sequence of
still imagery. The simulation environment includes synthetic point cloud generation, a 3D ATD/ATR algorithm
prototyping environment, and performance metrics for comparative assessment. In this paper, the workbench
components are described and preliminary results are presented. Ladar, video and still imagery datasets collected
during airborne trials are also detailed.
Laser-based 3D sensors measure range with high accuracy and allow detection of objects behind various types of
occlusion, e.g., tree canopies. Range information is valuable for detecting small objects that are typically represented
by 5-10 pixels in the data set. It is also valuable in tracking problems when the tracked object is occluded during parts
of its movement and when there are several objects in the scene. In this paper, ongoing work on detection and tracking
is presented. Detection of partly occluded vehicles is discussed; to detect partly occluded objects, we take advantage of
the range information to remove foreground clutter. The target detection approach is based on geometric features, for
example local surface detection, shadow analysis, and height-based detection. Initial results on tracking of humans are
also presented, and the benefits of range information are discussed. Results are illustrated using outdoor measurements
with a 3D FLASH LADAR sensor and a 3D scanning LADAR.
Automatic target recognition (ATR) based on the emerging technology of compressed sensing (CS) can considerably improve the accuracy, speed, and cost associated with these types of systems. An image-based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low-dimensional space. Compressed dictionaries A are formed to include rotational information for a scale of interest. The algorithm seeks to identify
y (the test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n ≪ m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1-minimization techniques.
Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was
utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges
up to 3 km for both image modalities.
Subspace projection is an effective and established way to form classes in the Automatic Target Acquisition (ATA)
problem. Class subspace formation is viewed in this paper as an overspecified F h = u problem. Recent advances in
compressive imaging show that this problem can be solved for sparse matrices via iterative techniques. Convergence of
these techniques is aided by a metric induced by an appropriately selected norm. In this paper we will use infrared data
to show this rapid class formation and to compare convergence for the two norms. Based on this class formulation, a
new ATA solution method is also demonstrated.
In this paper, we develop a framework for using only the needed data for automatic target recognition (ATR)
algorithms using the recently developed theory of sparse representations and compressive sensing (CS). We show
how sparsity can be helpful for efficient utilization of data, with the possibility of developing real-time, robust
target classification. We verify the efficacy of the proposed algorithm in terms of the recognition rate on the
well-known Comanche forward-looking infrared (FLIR) data set consisting of ten different military targets at
different orientations.
The development of a more unified theory of automatic target recognition (ATR) has received considerable attention over
the last several years from individual researchers, working groups, and workshops. One of the major benefits expected
to accrue from such a theory is an ability to analytically derive performance metrics that accurately predict real-world
behavior. Numerous sources of uncertainty affect the actual performance of an ATR system, so direct calculation has been
limited in practice to a few special cases because of the practical difficulties of manipulating arbitrary probability distributions
over high dimensional spaces. This paper introduces an alternative approach for evaluating ATR performance based
on a generalization of Norbert Wiener's polynomial chaos theory. Through this theory, random quantities are expressed not
in terms of joint distribution functions but as convergent orthogonal series over a shared random basis. This form can be
used to represent any finite-variance distribution and can greatly simplify the propagation of uncertainties through complex
systems and algorithms. The paper presents an overview of the relevant theory and, as an example application, a discussion
of how it can be applied to model the distribution of position errors from target tracking algorithms.
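As a toy illustration of the polynomial chaos idea (not the paper's derivation), a finite-variance random quantity such as Y = exp(X), X ~ N(0,1), can be expanded in probabilists' Hermite polynomials; here the coefficients are computed by Gauss-Hermite quadrature, and the zeroth coefficient recovers the mean without ever forming a distribution function:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite-e nodes/weights integrate against exp(-x^2/2); dividing
# by sqrt(2*pi) turns the quadrature sum into an expectation over N(0,1).
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)

def pce_coeffs(f, order):
    """Project f(X) onto He_0..He_order: c_k = E[f(X) He_k(X)] / k!
    (the He_k are orthogonal with E[He_j He_k] = k! delta_jk)."""
    c = np.empty(order + 1)
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        c[k] = np.sum(weights * f(nodes) * hermeval(nodes, basis)) / math.factorial(k)
    return c

# Expand Y = exp(X); the exact coefficients are exp(1/2) / k!.
c = pce_coeffs(np.exp, 8)
print(c[0])   # mean of exp(X): exp(0.5) ≈ 1.6487
```

Once a quantity is in this form, pushing it through a system reduces to algebra on the coefficient vector rather than manipulation of high-dimensional densities.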
This paper describes a new wavelet-based anomaly detection technique for a Forward Looking Infrared (FLIR)
sensor consisting of a long-wave (LW) and a mid-wave (MW) sensor. The proposed approach, called the wavelet-RX
algorithm, combines a two-dimensional (2-D) wavelet transform with the well-known multivariate
anomaly detector called the RX algorithm. In our wavelet-RX algorithm, a 2-D wavelet transform is first applied
to decompose the input image into uniform subbands. A number of significant subbands (high energy subbands)
are concatenated together to form a subband-image cube. The RX algorithm is then applied to each subband-image
cube obtained from wavelet decomposition of LW and MW sensor data separately. Experimental results
are presented for the proposed wavelet-RX and the classical CFAR algorithm for detecting anomalies (targets)
in a single broadband FLIR (LW or MW) sensor. The results show that the proposed wavelet-RX algorithm
outperforms the classical CFAR detector for both LW and MW FLIR sensor data.
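A minimal numerical sketch of the wavelet-RX idea — a one-level Haar decomposition whose detail subbands are stacked into a cube and fed to a global RX (Mahalanobis-distance) detector — on synthetic data; the subband selection logic and sensor specifics of the paper are omitted:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform; returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0        # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0        # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def rx(cube):
    """RX anomaly detector: Mahalanobis distance of each pixel's band
    vector from the global background statistics."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    z = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)
    d = np.einsum('ij,jk,ik->i', z, np.linalg.inv(cov), z)
    return d.reshape(h, w)

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[31:35, 31:35] += 6.0                  # implanted target straddling the Haar grid
cube = np.stack(haar2(img)[1:], axis=-1)  # LH, HL, HH detail subbands
scores = rx(cube)                         # high scores flag anomalies
```

The real algorithm ranks subbands by energy before concatenation and processes LW and MW cubes separately; this sketch only shows the decompose-stack-detect pipeline.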
This research paper investigates the use of the wavelet transform to extract spatially-invariant wavelet-based
shape signatures for automatic target recognition (ATR). Target signatures based on shape information can
be generally categorized as either contour-based or region-based. The wavelet-based shape signatures facilitate
detection and localization of important edge and texture information aiding in discrimination between targets.
To demonstrate the advantages of both edge and region information, we present an approach that combines
region-based shape methods and the wavelet transform for generating target signatures. Our approach generates
a rotationally invariant class of wavelet signatures based on the spatial ground pixel coverage of the target,
which is determined from the region-of-interest (ROI) in the wavelet domain. This process results in a multiresolution
representation of the target and provides a hierarchical approach to target signature matching. We
demonstrate this methodology using signatures from aircraft targets utilizing the Angular Radial Transform
(ART) as the region-based shape signature. Region-based signatures are shown to be more robust than contour-based
signatures in the presence of noise and disconnected target regions, providing greater confidence in target
identification. Our research results show the value of combining the rotational invariance of the ART signatures
with the localization and edge discrimination properties of the wavelet transform.
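The role of a rotation-invariant region signature can be illustrated with a simple radial-binning stand-in for the ART (purely illustrative; not the authors' transform): pixel mass is accumulated in radial bins about the region centroid, so a rotation of the region leaves the signature unchanged.

```python
import numpy as np

def radial_signature(img, nbins=16):
    """Rotation-invariant region signature: accumulate pixel mass in
    radial bins about the intensity centroid (a simple stand-in for
    the angular radial transform used in the paper)."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    cr, cc = (rows * img).sum() / total, (cols * img).sum() / total
    r = np.hypot(rows - cr, cols - cc)
    bins = np.minimum((r / (r.max() / nbins)).astype(int), nbins - 1)
    return np.bincount(bins.ravel(), weights=img.ravel(), minlength=nbins)

img = np.zeros((33, 33)); img[10:23, 14:19] = 1.0     # vertical bar region
rot = np.zeros((33, 33)); rot[14:19, 10:23] = 1.0     # same bar rotated 90°
sig_a = radial_signature(img)
sig_b = radial_signature(rot)
print(np.allclose(sig_a, sig_b))   # → True: signature survives rotation
```

The ART adds an angular basis on top of this radial structure; the point here is only why region-based, centroid-anchored signatures are insensitive to target orientation.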
Automated target cueing (ATC) can assist analysts with searching large volumes of imagery. Performance of most
automated systems is less than perfect, requiring an analyst to review the results to dismiss false alarms or confirm
correct detections. This paper explores methods for improving the presentation and visualization of the ATC output,
enabling more efficient and effective review of the detections flagged by the ATC. The techniques presented in this
paper are applicable to a wide range of search problems using data from different sensor modalities. The
information available to the computer increases as ATC detections are either accepted or rejected by the analyst. It
is often easy to confirm obviously correct detections and dismiss obvious false alarms, which provides the starting
point for the automated updating of the visualization. In machine learning algorithms, this information can be used
to retrain or refine the classifier. However, this retraining process is appropriate only when future sensor data is
expected to closely resemble the current set. For many applications, the sensor data characteristics (viewing
geometry, resolution, clutter complexity, prevalence and types of confusers) are likely to change from one data
collection to the next. For this reason, updating the visualization for the current data set, rather than updating the
classifier for future processing, may prove more effective. This paper presents an adaptive visualization technique
and illustrates the technique with applications.
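One plausible form of the adaptive re-ranking described above — boosting unreviewed detections that resemble analyst-accepted ones and demoting those that resemble rejected ones — might look like the following; the cosine-similarity weighting and the parameter `w` are assumptions for illustration, not the paper's method:

```python
import numpy as np

def rerank(scores, feats, accepted, rejected, w=0.5):
    """Adaptively re-order ATC detections for review: raise detections
    similar to analyst-accepted ones, lower those similar to rejected
    ones (similarity = mean cosine of feature vectors)."""
    def sim(idx):
        if not idx:
            return np.zeros(len(scores))
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        return (f @ f[idx].T).mean(axis=1)
    adjusted = scores + w * (sim(accepted) - sim(rejected))
    return np.argsort(-adjusted)           # review order, best first

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
scores = np.array([0.5, 0.5, 0.5, 0.5])    # ATC gave all four equal scores
print(rerank(scores, feats, accepted=[0], rejected=[2]))   # → [0 1 3 2]
```

Because only the presentation order changes, nothing is retrained, which matches the paper's argument for data sets whose characteristics will not carry over to the next collection.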
The global war on terror has plunged US and coalition forces into a battle space requiring the continuous adaptation of
tactics and technologies to cope with an elusive enemy. As a result, technologies that enhance the intelligence,
surveillance, and reconnaissance (ISR) mission by making the warfighter more effective are experiencing increased interest.
In this paper we show how a new generation of smart cameras built around foveated sensing makes possible a powerful
ISR technique termed Cascaded ATR. Foveated sensing is an innovative optical concept in which a single aperture
captures two distinct fields of view. In Cascaded ATR, foveated sensing is used to provide a coarse resolution,
persistent surveillance, wide field of view (WFOV) detector to accomplish detection level perception. At the same time,
within the foveated sensor, these detection locations are passed as a cue to a steerable, high fidelity, narrow field of view
(NFOV) detector to perform recognition level perception. Two new ISR mission scenarios, utilizing Cascaded ATR, are
proposed.
Spectral variability remains a challenging problem for target detection in hyperspectral (HS) imagery. In this paper, we
have applied the kernel-based support vector data description (SVDD) to perform full-pixel target detection. In target
detection scenarios, we do not have a collection of samples characterizing the target class; we are typically given a pure
target signature that is obtained from a spectral library. In our work, we use the pure target signature and first-order
Markov theory to generate N samples to model the spectral variability of the target class. We vary the value of N and
observe its effect to determine a value of N that provides acceptable detection performance.
We have inserted target signatures into an urban HS scene with varying levels of spectral variability to explore the
performance of the proposed SVDD target detection scheme in these scenarios. The proposed approach makes no
assumptions regarding the underlying distribution of the scene data as do traditional stochastic detectors such as the
adaptive matched filter (AMF). Detection results in the form of confusion matrices and receiver-operating-characteristic
(ROC) curves demonstrate that the proposed SVDD-based scheme is highly accurate and yields higher true positive rates
(TPR) and lower false positive rates (FPR) than the AMF.
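The two ingredients can be sketched together: first-order Markov (AR(1)) perturbation of a pure library signature to manufacture a target training set, and a kernel data description score. The equal-weight kernel-centroid score below is a simplification of a trained SVDD (which would weight support vectors), and all signatures are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_samples(sig, n, rho=0.95, sigma=0.05):
    """Perturb a pure signature with AR(1) (first-order Markov) noise so
    adjacent bands vary in a correlated way, modeling spectral variability."""
    b = sig.size
    noise = np.empty((n, b))
    noise[:, 0] = rng.normal(0.0, sigma, n)
    for j in range(1, b):
        noise[:, j] = rho * noise[:, j - 1] + np.sqrt(1 - rho**2) * rng.normal(0.0, sigma, n)
    return sig + noise

def rbf(a, b, gamma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svdd_score(train, test, gamma=5.0):
    """Squared distance in RBF feature space from each test point to the
    centroid of the training set (equal-weight simplification of SVDD);
    small score = inside the target class description."""
    n = train.shape[0]
    k_tt = rbf(train, train, gamma).sum() / n**2
    k_xt = rbf(test, train, gamma).sum(axis=1) / n
    return 1.0 - 2.0 * k_xt + k_tt        # k(x, x) = 1 for the RBF kernel

pure = np.sin(np.linspace(0, 3, 50))            # stand-in "library" signature
train = markov_samples(pure, n=200)             # N generated target samples
targets = markov_samples(pure, n=20)
background = rng.normal(0.0, 0.5, (20, 50))
print(svdd_score(train, targets).mean() < svdd_score(train, background).mean())
```

As in the paper, no distributional assumption is made about the background; the description is built from the target class alone.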
Spectral variability remains a challenging problem for target detection in hyperspectral (HS) imagery. In previous work,
we developed a target detection scheme using the kernel-based support vector data description (SVDD). We constructed
a first-order Markov-based Gaussian model to generate samples to describe the spectral variability of the target class.
However, the Gaussian-generated samples also require selection of the variance parameter σ² that dictates the level of
variability in the generated target class signatures. In this work, we have investigated the use of decision-level fusion
techniques for alleviating the problem of choosing a proper value of σ². We have trained a collection of SVDDs with
unique variance parameters σ² for each of the target training sets and have investigated their combination using the
traditional AND, OR, and majority vote (MV) decision-level rules. We have inserted target signatures into an urban HS
scene with differing levels of spectral variability to explore the performance of the proposed scheme in these scenarios.
Experiments show that the MV fusion rule is the best choice, providing relatively low false positive rates (FPR) while
yielding high true positive rates (TPR). Detection results show that the proposed SVDD-based decision-level scheme
using the MV fusion rule is highly accurate and yields higher true positive rates (TPR) and lower false positive rates
(FPR) than the adaptive matched filter (AMF).
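The three decision-level rules are straightforward to state in code (the votes below are toy values, not the paper's detector outputs):

```python
import numpy as np

def fuse(decisions, rule):
    """Fuse binary detector decisions (rows: detectors, cols: samples)
    with the AND, OR, or majority-vote (MV) decision-level rule."""
    d = np.asarray(decisions, dtype=bool)
    if rule == "AND":
        return d.all(axis=0)
    if rule == "OR":
        return d.any(axis=0)
    if rule == "MV":
        return d.sum(axis=0) * 2 > d.shape[0]   # strict majority
    raise ValueError(rule)

# Three SVDDs trained with different variance parameters vote on 5 pixels:
votes = [[1, 1, 0, 0, 1],
         [1, 0, 0, 1, 1],
         [1, 0, 1, 0, 0]]
print(fuse(votes, "MV").astype(int))   # → [1 0 0 0 1]
```

AND drives the FPR down at the cost of missed detections, OR does the reverse, and MV sits between them, which is consistent with the paper's finding that MV gives the best TPR/FPR balance.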
An essential component within most approaches used to evaluate ATR algorithm performance is an image database from
which a training set of images is chosen. Several fundamental questions arise regarding the adequacy of the database to
represent the desired domain of effectiveness, the sufficiency of the training set, the potential for enhancing the
constituents of the training set, the suitability for determining signal-to-clutter performance, and the realism of fairly
comparing the performance of ATR algorithms to one another. These questions have been addressed through an investigation into a
unified approach for database analysis and how it can be applied to evaluating ATR performance metrics.
Correlation filters (CF) have been widely used for detecting and recognizing patterns in 2-D images. These
filters are designed to yield sharp correlation peaks for desired objects while exhibiting low response to clutter
and background. CFs are designed using training images that resemble the object of interest. However, it is not
clear what should be the background of these training images. Some methods use a white background while
others use the mean value of the target region. It is important to determine an appropriate background since
a mismatched background may cause the filter to discriminate based on the background rather than the target
pattern. In this paper we discuss a method to choose training images, and we compare the effects of different
backgrounds on the filter performance in different scenarios using both synthetic (pixels in the background
chosen from a Gaussian distribution) and real backgrounds (photographs of different sceneries) for testing. In
our comparisons we do not restrict ourselves to using a background with constant pixel intensity for training but
also include in the training images backgrounds with varying pixel intensity with mean and standard deviation
equal to the mean and standard deviation of the target region. Experiments show that without a prior knowledge
of the background in the testing images, training the filters using a background with the mean and variance of
all the desired objects tends to give better results.
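A toy version of the experiment: place the target chip on a synthetic background whose mean and standard deviation match the target region (one of the training choices compared above), then locate the target in clutter by FFT correlation. The filter here is a plain zero-mean matched template rather than an optimized correlation filter, so this only illustrates the training-image construction and the correlation-peak test:

```python
import numpy as np

rng = np.random.default_rng(2)

def training_image(target, shape):
    """Place the target chip on a synthetic background whose mean and
    std match the target region's statistics."""
    img = rng.normal(target.mean(), target.std(), shape)
    r0 = (shape[0] - target.shape[0]) // 2
    c0 = (shape[1] - target.shape[1]) // 2
    img[r0:r0 + target.shape[0], c0:c0 + target.shape[1]] = target
    return img

def correlate(scene, template):
    """FFT-based circular cross-correlation with a zero-mean template."""
    t = template - template.mean()
    pad = np.zeros_like(scene)
    pad[:t.shape[0], :t.shape[1]] = t
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))))

target = np.ones((8, 8)); target[2:6, 2:6] = 3.0      # toy target pattern
train = training_image(target, (64, 64))              # matched-statistics background
scene = rng.normal(0.0, 1.0, (64, 64))
scene[20:28, 40:48] += target                         # target hidden in clutter
peak = np.unravel_index(correlate(scene, target).argmax(), scene.shape)
print(peak)   # correlation peak near the target's top-left corner (20, 40)
```

With a mismatched training background, a designed CF can latch onto background statistics instead of the target pattern, which is exactly the failure mode the paper's comparison probes.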
The Spectral and Polarimetric Imagery Collection Experiment (SPICE) is a collaborative effort between the US Army
ARDEC and ARL that is focused on the collection of mid-wave and long-wave infrared imagery using hyperspectral,
polarimetric, and broadband sensors.
The objective of the program is to collect a comprehensive database of the different modalities over the course of 1 to 2
years to capture sensor performance over a wide variety of weather conditions and the diurnal and seasonal changes
inherent to Picatinny's northern New Jersey location.
Using the Precision Armament Laboratory (PAL) tower at Picatinny Arsenal, the sensors will autonomously collect the
desired data around the clock at different ranges where surrogate 2S3 Self-Propelled Howitzer targets are positioned at
different viewing perspectives in an open field. The database will allow for: 1) Understanding of signature variability
under adverse weather conditions; 2) Development of robust algorithms; 3) Development of new sensors; 4) Evaluation
of polarimetric technology; and 5) Evaluation of fusing the different sensor modalities.
In this paper, we will present the SPICE data collection objectives, the ongoing effort, the sensors that are currently
deployed, and how this work will assist researchers in the development and evaluation of sensors, algorithms, and fusion
applications.
Stochastic resonance has received significant attention recently in the signal processing community with emphasis on
signal detection. The basic notion is that the performance of some suboptimal detectors can be improved by adding
independent noise to the measured (and already noise contaminated) observation. The notion of adding noise makes
sense if the observation is the result of nonlinear processing, and there exist proven scenarios where the signal-to-noise
ratio improves by adding independent noise. This paper reviews a set of parametric and nonparametric sub-optimal radar
target classification systems and explores (via computer simulation) the impact of adding independent noise to the
observation on the performance of such sub-optimal systems. Although noise is not added in an optimal fashion, it does
have an impact on the probability of classification error. Real radar scattering data of commercial aircraft models is used
in this study. The focus is on exploring scenarios where added noise may improve radar target classification.
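The basic stochastic-resonance effect for a hard-threshold detector can be reproduced in a few lines (the threshold, amplitudes, and noise levels are illustrative, not the radar data of the paper): a subthreshold signal is never detected on its own, but added independent noise pushes some samples over the threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hard-threshold (nonlinear) detector: declare a crossing whenever a
# sample exceeds the threshold. The signal alone is subthreshold.
signal = 0.8 * np.ones(200)
threshold = 1.0

def detect(x, noise_std):
    """Fraction of samples crossing the threshold after adding
    independent zero-mean Gaussian noise of the given std."""
    noisy = x + rng.normal(0.0, noise_std, x.size)
    return np.mean(noisy > threshold)

print(detect(signal, 0.0))   # 0.0 — subthreshold, never detected
print(detect(signal, 0.3))   # > 0 — added noise produces crossings
```

As the abstract notes, the effect hinges on the nonlinearity: for a linear system, adding independent noise can only degrade the SNR.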
This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1)
attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object
based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those
of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of
potential importance and extract the region data for processing by an object recognition and classification algorithm. The
attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a
preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph
or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and
cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention
and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on
feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows
many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher
can use this system as a robust front-end to a larger system that includes object recognition and scene understanding
modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal
tuning from the user.
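The flooding step — growing proto-object regions from seeds across neighbors with similar feature values — can be sketched with a generic flood fill (not the authors' implementation; the feature map, tolerance, and 4-connectivity are assumptions):

```python
import numpy as np
from collections import deque

def flood_regions(feature_map, tol):
    """Break an image into proto-object regions by flooding: grow a
    region from each unvisited pixel across 4-neighbors whose feature
    value is within `tol` of the seed value."""
    h, w = feature_map.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for sr in range(h):
        for sc in range(w):
            if labels[sr, sc] >= 0:
                continue
            seed = feature_map[sr, sc]
            queue = deque([(sr, sc)])
            labels[sr, sc] = next_label
            while queue:
                r, c = queue.popleft()
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= rr < h and 0 <= cc < w and labels[rr, cc] < 0
                            and abs(feature_map[rr, cc] - seed) <= tol):
                        labels[rr, cc] = next_label
                        queue.append((rr, cc))
            next_label += 1
    return labels, next_label

img = np.zeros((8, 8)); img[2:5, 2:6] = 1.0   # one bright proto-object
labels, n = flood_regions(img, 0.5)
print(n)   # → 2 (background plus the bright region)
```

In the full system such fragments are then merged by feature similarity into larger proto-objects, and the attention stage decides which region's data to pass to recognition.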
This paper describes an algorithm and system for rapidly generating a saliency map and finding interesting regions
in large (i.e., extremely high-resolution) imagery and video. Previous methods of finding salient or interesting
regions have a fundamental shortcoming: they need to process the entire image before the saliency map can be outputted
and are therefore very slow for large images. Prior attempts at parallelizing this operation involve computing feature
maps on separate processors, but these methods cannot provide a result until the entire image has been processed. Rather
than employing a single-step process, our system uses a recursive approach to estimate the saliency, processing parts of
the image in sequence and providing an approximate saliency map for these regions immediately. With each new part of
the image, a series of normalization factors is updated that connects all image parts analyzed so far. As more of the
image parts are analyzed, the saliency map of the previously analyzed parts as well as newly analyzed parts becomes
more exact. In the end, an exact global saliency map of the entire image is available. This algorithm can be viewed as (1)
a fast, parallelizable version of prior art, and/or (2) a new paradigm for computing saliency in large imagery/video. This
is critical, as the analysis of large, high-resolution imagery becomes more commonplace. This system can be employed
in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over
others. One can apply this system to any static scene, whether that is a still photograph or an image captured from video.
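The running-normalization idea can be sketched as follows: each tile's raw saliency is available immediately, and dividing by the running maximum yields an approximate map that becomes exact once the last tile has been processed. The per-tile saliency measure below is a toy stand-in for the feature maps of the real system:

```python
import numpy as np

def tile_saliency(tile):
    """Toy per-pixel saliency: absolute deviation from the tile mean."""
    return np.abs(tile - tile.mean())

def recursive_saliency(image, tile_size):
    """Process tiles in sequence, maintaining a running normalization
    factor that ties all tiles seen so far together; partial maps are
    available immediately, the final map is globally exact."""
    h, w = image.shape
    raw = np.zeros_like(image, dtype=float)
    running_max = 0.0
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            s = tile_saliency(image[r:r + tile_size, c:c + tile_size])
            raw[r:r + tile_size, c:c + tile_size] = s
            running_max = max(running_max, s.max())
            approx = raw / running_max     # approximate map, usable now
    return raw / running_max               # exact after the last tile

rng = np.random.default_rng(6)
img = rng.normal(0.0, 1.0, (64, 64))
img[8:12, 8:12] += 10.0                    # salient patch
sal = recursive_saliency(img, 16)
print(np.unravel_index(sal.argmax(), sal.shape))
```

The essential property is that early tiles need never be revisited: only the shared normalization factors are updated as new tiles arrive.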
This paper illustrates an approach to sequential hypothesis testing designed not to minimize the amount of data collected
but to reduce the overall amount of processing required, while still guaranteeing pre-specified conditional probabilities
of error. The approach is potentially useful when sensor data are plentiful but time and processing capability are
constrained. The approach gradually reduces the number of target hypotheses under consideration as more sensor data
are processed, proportionally allocating time and processing resources to the most likely target classes. The approach is
demonstrated on a multi-class ladar-based target recognition problem and compared with uniform-computation tests.
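A toy version of the pruning scheme — Bayesian updating over Gaussian class hypotheses with a posterior floor — shows how the candidate set shrinks as data arrive, concentrating later processing on the likely classes (the ladar likelihoods of the paper are replaced here by 1-D Gaussians):

```python
import numpy as np

rng = np.random.default_rng(4)

# Five candidate target classes with Gaussian feature means; truth is
# class 2. Classes whose posterior falls below a floor are dropped, so
# each new observation is scored only against the survivors.
means = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
active = list(range(5))
log_post = np.zeros(5)

for _ in range(30):
    x = rng.normal(means[2], 1.0)                    # new sensor datum
    log_post[active] += -0.5 * (x - means[active]) ** 2
    p = np.exp(log_post[active] - log_post[active].max())
    p /= p.sum()
    active = [k for k, pk in zip(active, p) if pk > 1e-3]
print(active)   # surviving hypotheses concentrate around class 2
```

Unlike a classical sequential test, the stopping resource here is computation, not data: eliminated classes cost nothing on subsequent observations, while the pre-specified floor bounds the probability of discarding the true class.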
The Air Force Institute of Technology's Center for Directed Energy's (AFIT/CDE), under sponsorship of the HEL Joint
Technology Office, and as part of a multidisciplinary research initiative on aero optics effects, has designed and
fabricated a laser pointing/tracking system. This system will serve as the laser source for a series of in-flight data
collection campaigns involving two aircraft.
Real-time tracking systems differ in an important way from automatic image analysis, although both activities often
involve the segmentation of an image and the automatic location of an item of interest. A number of advanced tracking
algorithms have been developed for applications that process previously captured data; medical imaging, for example,
frequently uses post-processing algorithms to segment anomalies.
In this paper we discuss an airborne laser pointing and tracking system and its requirements, designed and implemented
at AFIT. This application is different because the image processing must be completed during the inter-frame period.
AFIT analyzed available tracking algorithms including: centroid tracking, Fitts correlator, Posterior Track, and Active
Contour. These algorithms were evaluated on their ability to both accurately track and to be computed in real time using
existing hardware.
The analysis shows that some of the more accurate tracking algorithms are not easily implementable in real time. Often
there are large numbers of correlations that must be computed for each frame. Higher resolution images quickly escalate
this problem. Algorithm selection for tracking applications must balance the need for accuracy and computational
simplicity.
Real-time tracking algorithms are limited by the amount of time between frames in which to process the data.
Specialized hardware can improve this situation. We selected centroid tracking for the airborne application and evaluate
its performance to show that it meets the design requirements.
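Centroid tracking itself is only a few lines, which is precisely why it fits inside the inter-frame period (the frame, threshold, and blob below are illustrative):

```python
import numpy as np

def centroid_track(frame, threshold):
    """Intensity-weighted centroid of above-threshold pixels; a single
    O(pixels) pass, cheap enough to finish well inside the inter-frame
    period, unlike correlation-based trackers."""
    w = np.where(frame > threshold, frame, 0.0)
    total = w.sum()
    if total == 0.0:
        return None                      # no target above threshold
    rows, cols = np.indices(frame.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

frame = np.zeros((48, 48))
frame[10:14, 20:26] = 5.0                # bright target blob
print(centroid_track(frame, 1.0))        # → (11.5, 22.5)
```

By contrast, a correlator such as the Fitts algorithm requires per-frame correlation surfaces whose cost grows quickly with resolution, which is the trade-off the analysis above describes.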
For long-range imaging or low signal-to-noise-ratio environments, sightline jitter is a primary source of image
degradation. The conventional pointing and stabilization figure of merit is therefore the jitter RMS, with bearing friction
often the largest contributor overall. Recent work has shown that pixel smear during camera integration can be reduced if
adaptive friction compensation 'shapes' the jitter frequency content in addition to reducing the RMS value. This paper
extends this work by automating the tuning process for the sightline control parameters by using a genetic algorithm. The
GA fitness metric is the integral of the modulation transfer function due to any residual sightline jitter. It is shown that
this fitness function is significantly better than the current root-mean-square figure of merit typically employed in
stabilization loop design.
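The fitness metric can be illustrated directly: the smear MTF over one integration period depends on the trajectory of the jitter, not just its RMS, so two jitter records with identical RMS can score differently. All units and records below are hypothetical (pixels and cycles per pixel):

```python
import numpy as np

def smear_mtf(jitter, freqs):
    """MTF due to line-of-sight motion x(t) (in pixels) over one camera
    integration period: MTF(f) = | mean over t of exp(-i 2*pi*f*x(t)) |."""
    return np.abs(np.exp(-2j * np.pi * np.outer(freqs, jitter)).mean(axis=1))

def unit_rms(x):
    x = x - x.mean()
    return x / x.std()

t = np.linspace(0.0, 1.0, 1000, endpoint=False)      # one integration period
freqs = np.linspace(0.0, 0.5, 51)                    # up to Nyquist, cycles/pixel

# Two jitter records scaled to identical RMS but different frequency content:
slow = 0.35 * unit_rms(np.sin(2 * np.pi * 0.2 * t))  # sub-frame drift
fast = 0.35 * unit_rms(np.sin(2 * np.pi * 20.0 * t)) # many cycles per frame
fitness_slow = smear_mtf(slow, freqs).sum()          # GA fitness: MTF integral
fitness_fast = smear_mtf(fast, freqs).sum()
print(fitness_slow, fitness_fast)
```

An RMS-only figure of merit cannot distinguish the two records, which is the motivation given above for using the MTF integral as the genetic algorithm's fitness function.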
Recently, the author solved a general problem of fusing asynchronous tracks while taking
into account communication delays, data latency, and out-of-sequence tracks. The objective of
this paper is to perform a preliminary performance analysis of the asynchronous track fusion
algorithm. In this study, two sensors providing asynchronous measurements are considered,
where the update track fusion rate is fixed. A communication delay between at least one of the
sensors' platforms and the fusion center may exist. Monte Carlo simulations are performed using
simulated target tracks. The performance of the individual sensors as well as that of the fused
track is provided. The preliminary results show the benefit of track fusion under more realistic
assumptions than what is currently the practice.
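The paper's fusion algorithm is not reproduced here, but a standard track-fusion rule that is likewise robust to unknown cross-correlation between the two sensors' tracks — covariance intersection — shows the shape of such a computation (the states, covariances, and weight below are illustrative):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse two track estimates whose cross-correlation is unknown:
    P_f^-1 = w P1^-1 + (1 - w) P2^-1, with the state fused accordingly."""
    i1, i2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * i1 + (1.0 - w) * i2)
    x = P @ (w * i1 @ x1 + (1.0 - w) * i2 @ x2)
    return x, P

# Two sensor tracks of the same target (position, velocity):
x1 = np.array([10.0, 1.0]); P1 = np.diag([4.0, 1.0])   # poor position, good velocity
x2 = np.array([12.0, 1.2]); P2 = np.diag([1.0, 4.0])   # good position, poor velocity
xf, Pf = covariance_intersection(x1, P1, x2, P2, w=0.5)
print(xf)   # → [11.6  1.04]
```

In the asynchronous setting studied above, each track would additionally be propagated to the common fusion time before this combination step, which is where the delay and latency handling enters.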
An adaptive image pre-processor has been developed for a next-generation video tracking system. In a previous paper
we presented a wavelet-based enhancement pre-processor (AWEP) which showed good segmentation capability. Here
we discuss the impact of structural and implementation constraints placed on the algorithm when targeting a low-power
FPGA device. We outline the underlying issues and our approach to compensating for their effect and
regaining stability. Output results are given illustrating the segmentation performance after optimization of the
decomposition filter kernels. A set of results from the tracking system are presented to demonstrate the effectiveness of
the AWEP implementation on the tracking performance applied to real video.
Structural dynamics is one of the most important elements of a precision gimbal design and often dominates the system
line-of-sight stabilization and pointing performance. Structural effects are manifested in these systems in several unrelated ways that the
systems engineer, and other members of the design team, must understand in order to ensure a successful design. Once the effects are
clearly understood, analysis techniques, such as finite elements, can be applied to provide models to accurately predict the various
interactions and evaluate potential designs. Measurement techniques such as modal analysis can also be used to obtain models of
existing hardware and to verify the design. However, the successful and efficient application of the above process requires that the
underlying principles and effects are well understood by all the members of the engineering design team. This usually includes, as a
minimum, the control systems engineer, the structural analyst and the mechanical engineer but may involve other members of the
design team as well. Appropriate transfer functions for the various interactions, for example, can be defined and provided by the
structural analyst to the control system engineer to evaluate, and performance predictions can be iterated as necessary until the entire
system meets the required performance in the intended dynamic environment. Often, however, one or more members of the team do
not have an appreciation for the effects or the design process required, and the result is a frustrated design effort and lower system
performance than might otherwise have been easily achieved. While different systems can have vastly different requirements and
configurations, the above effects and techniques are common to most and this paper is an attempt to provide a straightforward outline
of the more common of these in order to improve communication among design team members so that they can all contribute at their
maximum potential.
The paper describes some design and implementation aspects of a low-cost inertial estimation unit based on
commercially available inertial sensors. The primary task for the unit described in this paper is to estimate the
attitude (orientation, pose), but the extension to estimating the position and height is planned. The size of the
unit is about the size of a handheld device. It includes a commercially available 3-axis rate gyro combined in a
single package with a 3-axis accelerometer and a 3-axis magnetometer. In order to include position estimation
capabilities a GPS receiver is attached and a barometric pressure sensor can be added. The primary limitation
of the implementation described in this paper is that it assumes no long term acceleration of the carrier (neither
along a linear nor along a curved path), which makes the result of less value in the aerospace industry but may have
some appeal to research engineers in other fields. The data measured by the three sensors are fused using
the extended Kalman filtering paradigm. No model of the dynamics of the carrier (aircraft, mobile robot, or
patient) is relied upon; the only modeled dynamics is that of the sensors, such as their bias and noise. The choice of
extended Kalman filtering methodology was dictated by strong requirements on computational simplicity. Some
experience with implementation of the proposed scheme on a digital hardware (ARM7 based microcontroller)
is shared in the paper. Finally, the functionality of the presented device is demonstrated in experiments. Besides
simple indoor tests, flight experiments were conducted using a small UAV helicopter.
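The no-long-term-acceleration assumption can be illustrated with a minimal, hypothetical single-axis fusion step. The actual unit fuses all nine sensor channels with an extended Kalman filter; the sketch below reduces the core idea, gyro integration corrected by the accelerometer's gravity reference, to a complementary filter with a fixed blending weight:

```python
import math

def fuse_pitch(gyro_rate, accel_x, accel_z, pitch_prev, dt, alpha=0.98):
    """One step of gyro/accelerometer fusion for a single (pitch) angle.

    gyro_rate        : measured angular rate [rad/s]
    accel_x, accel_z : body-frame accelerometer readings [m/s^2]
    alpha            : blending weight (trusts the gyro at short time scales)
    """
    # Integrate the gyro for a short-term, drift-prone estimate.
    pitch_gyro = pitch_prev + gyro_rate * dt
    # Under the no-long-term-acceleration assumption the accelerometer
    # measures gravity only, giving an absolute (drift-free) angle.
    pitch_accel = math.atan2(accel_x, accel_z)
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
```

With alpha close to 1 the gyro dominates at short time scales while the accelerometer slowly removes integration drift; an EKF replaces the fixed alpha with a gain computed from the modeled sensor bias and noise.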
The paper reports on several control engineering issues related to the design and implementation of an image-based pointing and tracking system for an inertially stabilized airborne camera platform. A medium-sized platform has been developed by the authors and a few more team members within a joint governmental project coordinated by the Czech Air Force Research Institute. The resulting experimental platform is based on a common double-gimbal configuration with two direct-drive motors and off-the-shelf MEMS gyros. An automatic vision-based tracking system is built on top of the inertial stabilization. The choice of a suitable control configuration is discussed first, because the decoupled structure for the inner inertial rate controllers does not extend easily to the outer image-based pointing and tracking loop. It appears that the pointing and tracking controller can benefit greatly from the availability of measurements of the inertial rate of the camera around its optical axis. The proposed pointing and tracking controller relies on feedback linearization, well known in image-based visual servoing. A simple compensation of the one-sample delay introduced into the (slow) visual pointing and tracking loop by the computer vision system is proposed. It relies on a modification of the well-known Smith predictor scheme in which the prediction takes advantage of the availability of the (fast and undelayed) inertial rate measurements.
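The delay-compensation idea can be sketched in a few lines for a hypothetical scalar target angle: the slow visual measurement arrives one sample late, and the fast inertial rate samples recorded since that frame are integrated to predict the current value, in the spirit of the modified Smith predictor described above:

```python
def predict_target_angle(visual_angle_delayed, rates_since_frame, dt):
    """Compensate the one-sample delay of the (slow) vision loop.

    visual_angle_delayed : camera-frame angle measured one visual sample ago
    rates_since_frame    : fast, undelayed inertial rates recorded since then
    dt                   : inertial sampling period

    The integrated rates bring the delayed measurement up to 'now'.
    """
    return visual_angle_delayed + sum(r * dt for r in rates_since_frame)
```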
Precision tracking applications using two-axis gimbal or antenna actuation systems suffer from a singularity when the
inner axis reaches ±90 degrees. This is known by various terms: the keyhole singularity, gimbal lock, or the nadir
problem. Practically, sightline control is degraded and often lost in a neighborhood of this singularity. In this paper, two
nonlinear control algorithms are applied to sightline pointing and stabilization control in the neighborhood of the nadir:
the traditional cosecant correction and the nonlinear generalized minimum variance technique. Both controllers were
tested against a validated model of an Aeromech TigerEye turret.
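The traditional correction can be sketched briefly: to hold a commanded sightline azimuth rate, the outer gimbal rate is scaled by the secant of the inner-axis angle (hence the divergence at ±90°), with a saturation limit standing in for the actuator's rate capability. The limit value below is purely illustrative:

```python
import math

def corrected_outer_rate(sightline_az_rate, inner_angle, rate_limit=50.0):
    """Classic 'cosecant' correction: the outer gimbal must turn faster by
    1/cos(inner_angle) to produce the same sightline azimuth rate, and the
    command is clamped because the correction diverges as the inner axis
    approaches +-90 degrees."""
    scale = 1.0 / max(abs(math.cos(inner_angle)), 1e-9)
    cmd = sightline_az_rate * scale
    return max(-rate_limit, min(rate_limit, cmd))
```

The clamp is exactly why sightline control degrades near the nadir: the required outer-axis rate exceeds what any actuator can deliver.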
A simple analytical model for the signal acquisition range for a laser guided mortar is presented. The signal consists of a
repetitively pulsed laser of fixed pulse duration and fixed pulse repetition frequency. The pulses are detected by a seeker
consisting of a quadrant photodiode and a trans-impedance amplifier. Noise is introduced from solar irradiance and
from the detector/amplifier electronics. The model maximizes the acquisition range by optimizing trans-impedance
amplifier circuit components. The model also compares integrating multiple low-energy pulses (MPLD) against detecting
each pulse individually (the conventional approach).
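The advantage of the MPLD approach can be sketched under the standard assumption of noise uncorrelated between pulses: summing n pulses grows the signal n times but the noise only √n times, and because the received pulse energy falls as 1/R², the √n SNR gain becomes an n^(1/4) acquisition-range gain. A hypothetical illustration:

```python
import math

def snr_single(pulse_signal, noise_rms):
    """SNR when each pulse is detected individually (conventional)."""
    return pulse_signal / noise_rms

def snr_integrated(pulse_signal, noise_rms, n_pulses):
    """SNR after summing n pulses: signal adds coherently (n x), while
    uncorrelated noise adds in quadrature (sqrt(n) x)."""
    return n_pulses * pulse_signal / (math.sqrt(n_pulses) * noise_rms)

def range_gain(n_pulses):
    """Received signal scales as 1/R^2, so a sqrt(n) SNR gain buys an
    n**0.25 increase in acquisition range."""
    return n_pulses ** 0.25
```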
Multi-beam technology is one of the key technologies in optical phased array systems for multi-object treatment and
multi-task operation. A multi-beam forming and steering method is proposed. This method uses an isosceles-triangle
multilevel phase grating (ITMPG) to form multiple beams simultaneously. The phase profile of the grating is a quantized
isosceles triangle with stairs. By changing the phase difference corresponding to the triangle height, multiple beams can
be steered symmetrically. Calculating the set of parameters for one ITMPG, i.e., one steering operation, took 34 ms. A liquid
crystal spatial light modulator was used for the experiment, forming 6 gratings whose distortion had been
compensated to an accuracy of 0.0408 λ. Each grating included 16 phase elements with the same period. The steering
angle corresponded to the triangle height, i.e., the phase difference. The relative diffraction efficiency for multiple
beams was greater than 81%, the intensity nonuniformity was less than 0.134, and the deflection resolution was 2.263 mrad.
Experimental results demonstrate that the proposed method can form and steer symmetrical multiple beams
simultaneously with the same intensity and high diffraction efficiency in the far field; the deflection resolution is related
to the reciprocal of the grating period.
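The grating construction described above can be sketched as follows; the element count, triangle height, and number of stairs are illustrative parameters, not the experimental values:

```python
def itmpg_profile(elements, height, stairs):
    """Quantized isosceles-triangle phase profile over one grating period.

    elements : phase elements per period
    height   : peak phase (the 'triangle height' that sets the steering angle)
    stairs   : number of quantization levels
    """
    step = height / stairs
    profile = []
    for i in range(elements):
        x = i / (elements - 1)                      # position in [0, 1]
        tri = height * (1.0 - abs(2.0 * x - 1.0))   # isosceles triangle
        profile.append(round(tri / step) * step)    # quantize to stairs
    return profile
```

The profile is symmetric about the period's midpoint, which is what makes the steered beams symmetric; raising `height` raises the phase difference and hence the steering angle.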
Volume holographic correlators offer the ability to encode and compare thousands of templates in one operation.
Angle multiplexing of each individual template means the position of the correlation spot in the output plane
corresponds to the matching template. To be useful as a correlator the shift invariance must be restored by
scanning the input image. This can be achieved by implementing the input signal modulation on a high-speed
SLM, such as an MQW or DLP device, that is capable of speeds in excess of 30 kHz. The output correlation peak is read
out using a high-speed linear CCD camera. The Bragg angle affects the number of templates that can be
held in the hologram. However, this is not the same in both directions, and this changes the correlator's shift
invariance in different scan directions. In this paper we investigate this and how it affects the correlator's
performance. This arrangement allows thousands of templates to be searched at video rate. The scanning nature
allows space domain correlation to be implemented. The system we describe offers the ability to pre-filter the
signal. We report on the results of a MACH filter implemented in a volume holographic correlator. The scanning
window allows some interesting pre-filtering to be performed, such as normalisation and non-linear optimisation.
Pattern recognition deals with the detection and identification of a specific target in an unknown input scene. Target
features such as shape, color, surface dynamics, and material characteristics are common target attributes used for
identification and detection purposes. Pattern recognition using multispectral (MS), hyperspectral (HS), and polarization-based
spectral (PS) imaging can be effectively exploited to highlight one or more of these attributes for more efficient
target identification and detection. In general, pattern recognition involves two steps: gathering target information from
sensor data and identifying and detecting the desired target from sensor data in the presence of noise, clutter, and other
artifacts. Multispectral and hyperspectral imaging (MSI/HSI) provide both spectral and spatial information about the
target. Whereas the reflection or emission spectral signatures depend on the elemental composition of objects residing within
the scene, the polarization state of radiation is sensitive to surface features such as relative smoothness or roughness,
surface material, shapes and edges, etc. Therefore, polarization information imparted by surface reflections of the target
yields unique and discriminatory signatures which could be used to augment spectral target detection techniques, through
the fusion of sensor data. Sensor data fusion is currently being used to effectively recognize and detect one or more of
the target attributes. However, variations between sensors and temporal changes within sensors can introduce noise in the
measurements, contributing to additional target variability that hinders the detection process. This paper provides a quick
overview of target identification and detection using MSI/HSI, highlighting the advantages and disadvantages of each. It
then discusses the effectiveness of using polarization-based imaging in highlighting some of the target attributes at single
and multiple spectral bands using polarization spectral imaging (PSI), known as spectropolarimetry imaging.
A new security system is proposed using optical joint transform correlation technique which employs multiple
phase-shifted reference images. In the proposed technique, the address code is used as the reference image and
fed into four channels, each phase-shifted by a different amount. The output signal from
each channel is added to the input image to be encrypted for security purposes. Joint power spectra (JPS)
signals can then be derived by applying Fourier transformation, and the resultant signals are phase-shifted and
combined to form a modified JPS signal. Inverse Fourier transformation of the modified JPS signal yields the
encrypted image which is now secure from any unauthorized access and/or loss of information. For decryption
purpose, the received encrypted signal is first Fourier transformed and multiplied by the address code used in
encryption, which is then inverse Fourier transformed to generate the output signal. The proposed technique
does not involve any complex mathematical operation on the address code otherwise required in other security
techniques. The proposed technique requires a simple architecture, operates fast and automatically, and is invariant
to noise and distortions. The performance of the proposed scheme is investigated through computer simulation
using binary as well as gray-scale images in both noise-free and noisy conditions.
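The phase-shifting step is the heart of the scheme: combining the four JPS channels with matching phase weights cancels the DC and autocorrelation terms, leaving only the cross term that carries the encrypted information. This can be verified on a single spectral sample, a one-pixel stand-in for the full Fourier-domain images:

```python
import cmath

def modified_jps(a, b):
    """Combine four phase-shifted joint power spectra of input-spectrum
    sample `a` and address-code-spectrum sample `b`. The |a|^2 + |b|^2
    terms and one cross term cancel, leaving 4 * conj(a) * b."""
    total = 0j
    for k in range(4):
        phi = k * cmath.pi / 2.0
        jps = abs(a + b * cmath.exp(1j * phi)) ** 2   # one channel's JPS
        total += jps * cmath.exp(-1j * phi)           # phase-shift and add
    return total
```

Because only the cross term survives, decryption reduces to a Fourier transform, multiplication by the address-code spectrum, and an inverse transform, exactly the simple architecture the abstract claims.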
Photorefractive Materials and Color in Image Processing Applications
In this paper, we exploit the nonlinearity inherent in four-wave mixing in organic photorefractive materials and
demonstrate edge enhancement, contrast conversion, and defect enhancement in a periodic structure. With the
availability of these materials, which have large space-bandwidth products, edge enhancement, contrast conversion and
defect enhancement are possible.
In this paper we demonstrate image restoration via photorefractive two-beam coupling. Our restoration is based on
coupling between the joint spectra of the distortion impulse response and the distorted image, and the clean reference
beam. The image restoration is used to demonstrate one-way image transmission in an aberrating medium. Our
experimental demonstration is supported by theoretical modeling of the restoration process and by computer simulation.
Bidimensional empirical mode decomposition (BEMD) decomposes an image into several bidimensional intrinsic
mode components, which are useful for various image enhancement and/or feature extraction applications. However,
because of the requirement of scattered data interpolation and associated difficulties, the classical BEMD
methods appear unsuitable for many applications. Recently, a fast and adaptive BEMD (FABEMD) method
has been proposed, which alleviates some of the difficulties otherwise encountered in classical BEMD approaches. On
the other hand, existing BEMD methods have been proposed for gray-scale images only. This paper first presents a
novel BEMD approach for color images known as color BEMD (CBEMD), which employs FABEMD principle
and decomposes a color image into color bidimensional intrinsic mode components based on hierarchical local
spatial variation of image intensity and color. In fact, FABEMD facilitates the extension of the BEMD process
for color images in a convenient and useful way, whereas the other interpolation based BEMD techniques appear
unsuitable for this purpose. In FABEMD, order statistics filters are employed to estimate the envelope surfaces
from the data instead of surface interpolation, which enables fast decomposition and well characterized bidimensional
intrinsic mode components. Second, the CBEMD is utilized in this paper for adjusting and/or modifying
the trend of color images. In this process, the image is reconstructed by adding the color bidimensional intrinsic
mode components after applying suitably selected weights. Test results with real images demonstrate the
potential of the proposed CBEMD method for color image processing, which include color trend adjustment.
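The order-statistics envelope idea that makes FABEMD fast can be sketched in one dimension: instead of interpolating surfaces through the extrema, the upper and lower envelopes are simply running max and min filters. (In FABEMD the window size is derived from the extrema spacing; a fixed, illustrative window is used here.)

```python
def envelope_surfaces(signal, win):
    """Upper/lower envelope estimation with order-statistics (running
    max/min) filters, the 1-D analogue of the FABEMD approach, in place
    of scattered-data surface interpolation."""
    n, h = len(signal), win // 2
    upper, lower = [], []
    for i in range(n):
        window = signal[max(0, i - h): i + h + 1]
        upper.append(max(window))
        lower.append(min(window))
    return upper, lower
```

Subtracting the mean of the two envelopes from the signal, and iterating, yields the intrinsic mode components; for color images the same filtering is applied per channel.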
Distortion Invariant Filters, Optical Correlators, and New Pattern Recognition Applications
A moving space domain window is used to implement a Maximum Average Correlation Height (MACH) filter which
can be locally modified depending upon its position in the input frame. This enables adaptation of the filter dependent on
locally varying background clutter conditions and also enables normalization of the filter energy levels at each step.
Thus the spatial domain implementation of the MACH filter offers an advantage over its frequency domain
implementation as shift invariance is not imposed upon it. The only drawback of the spatial domain implementation of
the MACH filter is the amount of computational resource required for a fast implementation. Recently an optical
correlator using a scanning holographic memory has been proposed by Birch et al [1] for the real-time implementation of
space variant filters of this type. In this paper we describe the discrimination abilities against background clutter and
tolerance to in-plane rotation, out-of-plane rotation, and changes in scale of a MACH correlation filter implemented in the
spatial domain.
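The locally normalized correlation that the spatial-domain implementation permits can be sketched in one dimension; the real filter is two-dimensional and uses the MACH template, for which an arbitrary template stands in here:

```python
import math

def local_norm_corr(signal, template):
    """Spatial-domain correlation with per-position energy normalization,
    something a fixed frequency-domain (shift-invariant) filter cannot
    provide. Scores lie in [-1, 1]; 1 means an exact scaled match."""
    m = len(template)
    t_energy = math.sqrt(sum(t * t for t in template))
    scores = []
    for i in range(len(signal) - m + 1):
        patch = signal[i:i + m]
        p_energy = math.sqrt(sum(p * p for p in patch)) or 1.0  # avoid /0
        scores.append(sum(p * t for p, t in zip(patch, template))
                      / (p_energy * t_energy))
    return scores
```

Because the normalization is recomputed at every position, the response is insensitive to local gain changes in the background clutter, which is precisely the advantage the abstract attributes to the spatial-domain form.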
A practical challenge that designers of hyperspectral (HS) target detection algorithms must confront is the variety of
spectral sampling properties exhibited by various HS imaging sensors. Examples of these variations include different
spectral resolutions and the possibility of regular or irregular sampling. To confront this problem, we propose
construction of a spectral synthetic discriminant signature (SSDS). The SSDS is constructed from q spectral training
signatures which are obtained by sampling the original target signature. Since the SSDS is formulated offline, it does not
impose any burden on the processing speed of the recognition process. Results on our HS scenery show that use of the
SSDS in conjunction with the spectral fringe-adjusted joint transform correlation (SFJTC) algorithm provides spectrally
invariant target detection, yielding area under the ROC curve (AUROC) values above 0.993.
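The resampling underlying the SSDS can be sketched with linear interpolation onto an arbitrary (regular or irregular) sensor wavelength grid, followed by the simplest possible synthetic-discriminant construction, an equal-weight average of the q training signatures; the paper's actual SSDS formulation may weight them differently:

```python
def resample(wl_src, sig_src, wl_dst):
    """Linearly interpolate a signature (sig_src sampled at ascending
    wavelengths wl_src) onto a sensor's wavelength grid wl_dst."""
    out = []
    for w in wl_dst:
        j = 1
        while j < len(wl_src) - 1 and wl_src[j] < w:
            j += 1
        t = (w - wl_src[j - 1]) / (wl_src[j] - wl_src[j - 1])
        t = max(0.0, min(1.0, t))  # clamp: hold end values outside the grid
        out.append(sig_src[j - 1] + t * (sig_src[j] - sig_src[j - 1]))
    return out

def ssds(training_signatures):
    """Equal-weight average of the q resampled training signatures."""
    q = len(training_signatures)
    return [sum(col) / q for col in zip(*training_signatures)]
```

Since this construction runs offline, it adds nothing to the runtime cost of the SFJTC recognition stage.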
Efficient recognition and clearance of subsurface land mine patterns has been a challenging humanitarian
and military task. Among the several subsurface land mine pattern recognition techniques available, passive
imaging techniques are more convenient and safer, with a good probability of recognition. There exist extensive
applications where joint-transform correlation algorithms have been used for efficient pattern recognition.
However, among the several pattern recognition algorithms that exist for subsurface land mines, the joint-transform
correlation ones have been underrepresented. This paper presents the application of an efficient wavelet-filter joint
transform correlation (WFJTC) algorithm for the recognition of passive imagery of subsurface land mines in highly
cluttered scenarios, using intensity- and polarization-based imagery. We further improve the recognition efficiency of
the WFJTC by proposing a combined optical-digital enhancement approach. The improvements are justified using
correlation performance metrics.
Surveillance and its security applications have been critical subjects recently with various studies placing a high demand
on robust computer vision solutions that can work effectively and efficiently in complex environments without human
intervention. In this paper, an efficient illumination invariant template generation and tracking method to identify and
track abandoned objects (bags) in public areas is described. Intensity and chromaticity distortion parameters are initially
used to generate a binary mask containing all the moving objects in the scene. The binary blobs in the mask are tracked,
and those found static through the use of a 'centroid-range' method are segregated. A Laplacian of Gaussian (LoG) filter
is then applied to the parts of the current frame and the average background frame, encompassed by the static blobs, to
pick up the high frequency components. The total energy is calculated for both the frames, current and background,
covered by the detected edge map to ensure that illumination change has not resulted in false segmentation. Finally, the
resultant edge map is registered and tracked through the use of a correlation-based matching process. The algorithm has
been successfully tested on the i-LIDS dataset, and the results are presented in this paper.
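The high-frequency verification step can be sketched as follows, with a plain 3×3 Laplacian standing in for the paper's Laplacian of Gaussian and an illustrative energy ratio as the decision threshold:

```python
def edge_energy(img):
    """Total high-frequency energy of a patch, using a discrete 3x3
    Laplacian as a stand-in for the Laplacian of Gaussian."""
    h, w = len(img), len(img[0])
    e = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4.0 * img[y][x])
            e += lap * lap
    return e

def is_real_object(cur_patch, bg_patch, ratio=2.0):
    """Keep a static blob only if the current frame carries noticeably
    more edge energy than the background patch; a pure illumination
    change shifts intensities but adds little new high-frequency
    structure. The ratio threshold is illustrative."""
    return edge_energy(cur_patch) > ratio * edge_energy(bg_patch)
```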
An Automated Target Recognition (ATR) system was developed to locate and target small objects in images and
videos. The data are preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible
regions-of-interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent
to a neural network (NN) to be classified. The features are analyzed by the NN classifier, indicating whether each ROI
contains the desired target or not. The ATR system was found useful for identifying small boats in the open
sea. However, due to "noisy backgrounds," such as weather conditions, background buildings, or water wakes, some
false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized for
generalization of representative features to reduce the false-alarm rate. The neural networks are compared for their
performance in classification accuracy, classification time, and training time.
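The PCA stage can be sketched with a power-iteration computation of the leading principal component; this is a minimal stand-in, as the actual system extracts multiple components and feeds the projections to the neural network classifiers:

```python
def pca_first_component(data, iters=100):
    """Leading principal component of row-vector samples, found by power
    iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    X = [[x - m for x, m in zip(row, mean)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # One covariance-matrix multiply: w = (X^T X v) / n
        Xv = [sum(row[j] * v[j] for j in range(d)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

def project(row, mean, v):
    """The PCA feature: projection of a sample onto the component."""
    return sum((x - m) * c for x, m, c in zip(row, mean, v))
```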
Owing to their excellent physical properties, photorefractive crystals such as BSO (Bi12SiO20), BaTiO3, and GaAs can
be widely used in optical correlators to implement automatic pattern recognition. As basic devices in an optical
correlator, the properties of the optically-addressed spatial light modulator are very important. By analyzing the dynamic
process of the BSO spatial light modulator, especially the changes of the read-out light during writing under various
operation modes, the distinctions between the various operation modes are summarized. Furthermore, taking the
photo-induced current pulses into account, a method to optimize the BSO spatial light modulator is proposed. The BSO
spatial light modulator working in its optimum operation mode is used to design an optical correlator that implements
automatic pattern recognition.