Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing
advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of
confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new
targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set
manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for
inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer-generated
synthetic images of military aircraft as training data, providing a baseline for military-grade ATR: (1) a frequentist
approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support
vector machine (SVM). Both algorithms use histograms of oriented gradients (HOG) as features, together with artificial
augmentation of both real and synthetic image chips to make the most of minimal training data. Our results show that
open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs
from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the
relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications
that may improve the ability of synthetic data to represent real data.
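The abstract does not spell out the template-matching details, but the core idea of a template-based open set recognizer, scoring an input against per-class templates and labelling it unknown when the best score falls below a threshold, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the simplified whole-image HOG (no cell/block structure), the cosine-similarity score, and the threshold `tau` are all assumptions for demonstration.

```python
import numpy as np

def hog_features(img, n_bins=8):
    # Simplified whole-image HOG: one unsigned-orientation histogram,
    # gradient-magnitude weighted, L2-normalised. Real HOG uses cells/blocks.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)

class OpenSetTemplateRecognizer:
    """Label an input 'unknown' when its best template match falls below tau."""
    def __init__(self, tau=0.7):                          # tau: assumed threshold
        self.tau = tau
        self.templates = {}                               # class label -> feature list

    def fit(self, chips, labels):
        for chip, lab in zip(chips, labels):
            self.templates.setdefault(lab, []).append(hog_features(chip))

    def predict(self, chip):
        f = hog_features(chip)
        scores = {lab: max(float(f @ t) for t in feats)   # cosine similarity
                  for lab, feats in self.templates.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= self.tau else "unknown"
```

In this sketch the open set behaviour comes entirely from the rejection threshold: inputs whose gradient-orientation statistics differ from every stored template are returned as "unknown" rather than forced into a training class.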
Coherent change detection (CCD) provides a way for analysts and automatic detectors to find ephemeral features that would otherwise be invisible in traditional synthetic aperture radar (SAR) imagery. However, CCD can produce false alarms in image regions with a low signal-to-noise ratio (SNR) and in heavily vegetated areas. The proposed method seeks to eliminate these false-alarm regions by creating a mask that can then be applied to change products. It does so by exploiting both the magnitude and coherence statistics of a scene. For each feature, the image is segmented into groups of similar pixels called superpixels. A training phase then models each terrain type that the user deems capable of supporting change, and superpixels in the image are statistically compared to the modelled terrain types. Finally, the method combines the features using probabilistic fusion to create a mask that a user can threshold and apply to a change product, either for human analysis or for automatic feature detectors.
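The abstract does not specify the fusion rule, but combining per-superpixel probabilities from two features into a thresholdable mask can be sketched as below. This is a hypothetical illustration under an independence assumption (naive Bayes fusion); the function names, the 0.5 default threshold, and the use of a per-pixel superpixel label map are all assumptions, not the paper's method.

```python
import numpy as np

def fuse_terrain_probs(p_mag, p_coh):
    # Naive Bayes fusion of per-superpixel probabilities that the terrain
    # can support change, one from magnitude and one from coherence,
    # assuming the two features are conditionally independent.
    num = p_mag * p_coh
    den = num + (1.0 - p_mag) * (1.0 - p_coh)
    return num / np.maximum(den, 1e-12)

def build_mask(p_mag, p_coh, labels, threshold=0.5):
    # labels: integer superpixel id per pixel (2-D array);
    # p_mag, p_coh: probability per superpixel id (1-D arrays).
    fused = fuse_terrain_probs(p_mag, p_coh)
    return fused[labels] >= threshold        # per-pixel boolean mask
```

Broadcasting the fused per-superpixel scores through the label map yields a pixel-level mask directly, so the same mask can be applied to any change product registered to the scene.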