Multi-sensor fusion algorithms combine information from different sensors to exceed the performance of a single sensor for a given task. In this work, we focus on fusing imagery from electro-optical (EO) and synthetic aperture radar (SAR) sensors for target identification. In addition to the imagery itself, large amounts of metadata or “side information” may be available as well. This data can include important characteristics about the operating conditions (OCs) under which the images were taken. On its own, this metadata is not useful for target identification. However, this extra information can potentially be leveraged to learn better representations of EO and SAR images and improve classification performance. In this work, we assume that side information is available only during training and leverage this information to build contextual deep representations of the target classes. At test time, we fuse the EO and SAR representations to classify the input images without accessing the metadata. We examine the impact of these OC-aware target representations on fusion performance under various forms of OC mismatch between training and testing and show that fusing models trained with side information improves classification accuracy when compared to classifiers trained without side information, especially under more significant train/test OC shifts. We also observe that the inclusion of side information may reduce the trained network’s capacity, which implies that side information introduces a regularizing effect. To further study this effect, we empirically compare our approach to classifiers trained with weight decay and bottleneck layers and find that our approach achieves higher accuracy, implying that the inclusion of side information has additional impacts on the learned representations beyond simple regularization.
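The abstract does not specify the exact mechanism for building OC-aware representations, but one common pattern consistent with "side information available only during training" is an auxiliary head that predicts the metadata from the fused features, so the metadata shapes the representation without being needed at test time. The following PyTorch sketch illustrates that pattern; all architectures, dimensions, and the auxiliary loss weight are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: OC metadata used only at train time via an auxiliary
# prediction head; test-time EO/SAR fusion requires no metadata.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small CNN mapping a single-channel image chip to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class FusionWithSideInfo(nn.Module):
    def __init__(self, n_classes=10, n_oc=5, feat_dim=64):
        super().__init__()
        self.eo_enc = Encoder(feat_dim)
        self.sar_enc = Encoder(feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, n_classes)
        # Auxiliary head predicts the operating condition from the fused
        # features; it shapes training and is discarded at test time.
        self.oc_head = nn.Linear(2 * feat_dim, n_oc)
    def forward(self, eo, sar):
        z = torch.cat([self.eo_enc(eo), self.sar_enc(sar)], dim=1)
        return self.classifier(z), self.oc_head(z)

model = FusionWithSideInfo()
ce = nn.CrossEntropyLoss()
eo = torch.randn(8, 1, 64, 64)   # toy EO chips
sar = torch.randn(8, 1, 64, 64)  # toy SAR chips
labels = torch.randint(0, 10, (8,))
oc_labels = torch.randint(0, 5, (8,))  # metadata, available only in training
logits, oc_logits = model(eo, sar)
loss = ce(logits, labels) + 0.1 * ce(oc_logits, oc_labels)  # weighted aux loss
loss.backward()
# At test time: logits, _ = model(eo, sar)  -- no metadata required.
```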
Synthetic aperture radar (SAR) is an all-weather sensor with many uses, including target recognition. We present our work on training a network on synthetic SAR imagery for good performance on measured images. Previous work applied PCA decomposition to a dataset of synthetic and measured SAR imagery for image recognition, with initially promising results. This work continues that line of research with kernel PCA using a number of kernels. These techniques are fit using synthetic SAR images; the measured images are then projected into the learned space at test time. Networks are trained on the lower-dimensional vectors from the synthetic images and tested on measured images. Performing dimensionality reduction in this way has applications for increasing the speed of network training and evaluation and for reducing the difference between the synthetic and measured domains. We present results on the publicly available SAMPLE dataset.
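A minimal sketch of the pipeline described above, using scikit-learn: kernel PCA is fit on synthetic chips only, measured chips are projected into the same space at test time, and a classifier is trained solely on the reduced synthetic vectors. The kernel choice, dimensions, and classifier are stand-in assumptions, and the data here are random placeholders.

```python
# Illustrative sketch (not the authors' exact pipeline): fit kernel PCA on
# synthetic SAR chips, project measured chips, classify in the reduced space.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_syn = rng.normal(size=(500, 128 * 128))   # stand-in synthetic chips, flattened
y_syn = rng.integers(0, 10, size=500)
X_meas = rng.normal(size=(100, 128 * 128))  # stand-in measured chips

kpca = KernelPCA(n_components=64, kernel="rbf", gamma=1e-4)
Z_syn = kpca.fit_transform(X_syn)   # fit on synthetic images only
Z_meas = kpca.transform(X_meas)     # measured images projected at test time

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
clf.fit(Z_syn, y_syn)               # train on reduced synthetic vectors
preds = clf.predict(Z_meas)         # evaluate on measured vectors
```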
Deep learning has been applied to a host of problems in the decade since its introduction. Of particular interest for both defense and civil applications is automatic target recognition, a subset of visual detection and classification. However, these classification algorithms must be robust to out-of-library confusers and able to generalize across a variety of target types. In this paper, we augment the existing Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset of synthetic aperture radar images with the remainder of the public MSTAR dataset and define a set of experiments to encourage the development of traits beyond simple classification accuracy for target recognition algorithms.
Synthetic aperture radar (SAR) is an all-weather sensor with many uses, including target recognition. We present our latest efforts to train a network on synthetic SAR imagery for good performance on measured images. We apply an eigenimage-based classification network to the SAMPLE dataset, a dataset of synthetic and measured SAR imagery. Eigenimages are extracted from the synthetic images, then used to encode both types of images. This encoding takes the form of a vector describing the weighted contribution of each eigenimage to a given image. This reduces the extraneous noise in the measured image and helps bridge the gap between the two domains. We train a variety of classifiers, including fully-connected networks, support vector machines, and logistic regression, on the weight vectors for synthetic images, then test on measured vectors. We present the results on the publicly available SAMPLE dataset.
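The eigenimage encoding described above corresponds to projecting each chip onto the principal components of the synthetic set; the projection weights form the vector fed to each classifier. The following sketch illustrates this with scikit-learn PCA and two of the classifiers named in the abstract; dimensions and data are placeholder assumptions.

```python
# Minimal sketch of eigenimage encoding: principal components of the
# synthetic set act as eigenimages, and each image is encoded as its
# vector of projection weights onto those components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_syn = rng.normal(size=(500, 64 * 64))   # flattened synthetic SAR chips
y_syn = rng.integers(0, 10, size=500)
X_meas = rng.normal(size=(100, 64 * 64))  # flattened measured SAR chips

pca = PCA(n_components=50)
W_syn = pca.fit_transform(X_syn)    # eigenimages fit on synthetic data only
W_meas = pca.transform(X_meas)      # measured images encoded with same basis

for clf in (SVC(kernel="linear"), LogisticRegression(max_iter=1000)):
    clf.fit(W_syn, y_syn)           # train on synthetic weight vectors
    preds = clf.predict(W_meas)     # test on measured weight vectors
```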
Within the field of target recognition, significant attention is given to data fusion techniques to optimize decision making in systems of multiple sensors. The challenge of fusing synthetic aperture radar (SAR) and electro-optical (EO) imagery is of particular interest to the defense community due to those sensors' prevalence in target recognition systems. In this paper, the performances of two network architectures (a simple CNN and a ResNet) are compared, each implemented with multiple fusion methods to classify SAR and EO imagery of military targets. The Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, an expansion of the MSTAR dataset, is used, drawing on both the original measured SAR data and synthetic EO data. The classification performance of both networks is compared using the data modalities individually, feature-level fusion, decision-level fusion, and a novel fusion method based on the three RGB input channels of the ResNet (or other CNN for color image processing). In the proposed input channel fusion method, SAR imagery is fed to one of the three input channels, and the grayscale EO data is passed to a second channel. Despite its simplicity and off-the-shelf implementation, the input channel fusion method provides strong results, indicating it is worthy of further study.
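The input channel fusion idea lends itself to a very short sketch: the SAR chip occupies one channel of a pseudo-RGB tensor, the grayscale EO chip another, and an off-the-shelf ResNet consumes the result. Which channel receives which modality, and what fills the third channel, are assumptions here; the abstract does not specify.

```python
# Sketch of input-channel fusion: SAR in one RGB channel, grayscale EO in a
# second, fed to a standard 3-channel ResNet classifier.
import torch
import torchvision.models as models

sar = torch.randn(8, 1, 224, 224)  # toy SAR batch
eo = torch.randn(8, 1, 224, 224)   # toy grayscale EO batch
third = torch.zeros_like(sar)      # unused third channel (an assumption)
fused = torch.cat([sar, eo, third], dim=1)  # (B, 3, H, W) pseudo-RGB input

resnet = models.resnet18(num_classes=10)   # off-the-shelf ResNet
logits = resnet(fused)
```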
Machine learning systems are known to require large amounts of data to effectively generalize. When this data is not available, synthetically generated data is often used in its place. With synthetic aperture radar (SAR) imagery, the domain shift required to effectively transfer knowledge from simulated to measured imagery is non-trivial. We propose pairing convolutional neural networks (CNNs) with generative adversarial networks (GANs) to learn an effective mapping between the two domains. Classification networks are trained individually on measured and synthetic data, then a mapping between layers of the two CNNs is learned using a GAN.
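A hedged sketch of the feature-mapping idea: a small GAN learns to translate feature vectors extracted from a layer of the synthetic-trained CNN into the feature space of the measured-trained CNN. The architectures, layer choice, and dimensions below are assumptions for illustration only.

```python
# Toy GAN mapping between the feature spaces of two trained CNNs.
import torch
import torch.nn as nn

feat_dim = 128
G = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Stand-ins for features extracted from a chosen layer of each CNN.
f_syn = torch.randn(32, feat_dim)   # synthetic-network features
f_meas = torch.randn(32, feat_dim)  # measured-network features

# Discriminator step: real = measured features, fake = mapped synthetic.
opt_d.zero_grad()
d_loss = bce(D(f_meas), torch.ones(32, 1)) + \
         bce(D(G(f_syn).detach()), torch.zeros(32, 1))
d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator with mapped synthetic features.
opt_g.zero_grad()
g_loss = bce(D(G(f_syn)), torch.ones(32, 1))
g_loss.backward(); opt_g.step()
```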
Recent studies have shown that machine learning networks trained on simulated synthetic aperture radar (SAR) images of vehicular targets do not generalize well to classification of measured imagery. This disconnect between the two domains is an interesting, as-yet-unsolved problem. We apply an adversarial training technique to try to provide more information to a classification network about a given target. By constructing adversarial examples against synthetic data to fool the classifier, we expect to extend the network decision boundaries to include a greater operational space. These adversarial examples, in conjunction with the original synthetic data, are jointly used to train the classifier. This technique has been shown in the literature to increase network generalization in the same domain, and our hypothesis is that this will also help to generalize to the measured domain. We present a comparison of this technique to off-the-shelf convolutional classifier methods and analyze any improvement.
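The abstract does not name the attack used to craft the adversarial examples; the fast gradient sign method (FGSM) is one standard choice and suffices to illustrate the training loop. The model, perturbation budget, and loss weighting below are placeholder assumptions.

```python
# Illustrative adversarial-training step using FGSM (one standard attack;
# the paper's exact method is not specified here). Adversarial examples are
# crafted against the synthetic chips and used alongside the originals.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 10))  # toy classifier
ce = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(16, 1, 64, 64, requires_grad=True)  # synthetic batch
y = torch.randint(0, 10, (16,))

loss = ce(model(x), y)
loss.backward()                                     # populates x.grad
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).detach()      # FGSM perturbation

# Joint training on original synthetic data plus its adversarial examples.
opt.zero_grad()
joint_loss = ce(model(x.detach()), y) + ce(model(x_adv), y)
joint_loss.backward()
opt.step()
```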
Machine learning techniques such as convolutional neural networks have progressed rapidly in the past few years, propelled by their widespread success in many areas. Convolutional networks work by transforming input images into compact representations that cluster well with the representations of related images. However, these representations are often not human-interpretable, which is unsatisfying. One field of research, image saliency, attempts to show where in an image a trained network is looking to obtain its information. With this method, well-trained networks will reveal a focus on the object matching the label and ignore the background or other objects. We train and test neural networks on synthetic SAR imagery and use image saliency techniques to investigate the areas of the image on which the network is focused. Doing so should reveal whether the network is using relevant information in the image, such as the shape of the target. We test various image saliency techniques and classification networks, then analyze and comment on the resulting saliency maps to gain insight into what the networks learn on simulated SAR data. This investigation is designed to serve as a tool for evaluating future SAR target recognition machine learning algorithms.
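A minimal saliency sketch using plain input gradients, one of the simpler techniques in this family: the magnitude of the derivative of the class score with respect to each pixel indicates where the network draws its evidence. The model and image sizes are stand-ins, not those of the paper.

```python
# Vanilla gradient saliency: bright pixels in the map are the ones whose
# perturbation most changes the predicted-class score.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

x = torch.randn(1, 1, 64, 64, requires_grad=True)  # one SAR chip
score = model(x)[0].max()          # score of the predicted class
score.backward()
saliency = x.grad.abs().squeeze()  # (64, 64) map; bright = influential
```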
The publicly-available Moving and Stationary Target Acquisition and Recognition (MSTAR) synthetic aperture radar (SAR) dataset has been a valuable tool in the development of SAR automatic target recognition (ATR) algorithms over the past two decades, leading to the achievement of excellent target classification results. However, because of the large number of possible sensor parameters, target configurations, and environmental conditions, the SAR operating condition (OC) space is vast. This makes it infeasible to collect sufficient measured data to cover the entire OC space. Thus, synthetic data must be generated to augment measured datasets. Studying the fidelity of synthetic data with respect to classification tasks is itself non-trivial. To that end, we introduce the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, which consists of SAR imagery from the MSTAR dataset and well-matched synthetic data. By matching target configurations and sensor parameters among the measured and synthetic data, the SAMPLE dataset is ideal for investigating the differences between measured and synthetic SAR imagery. In addition to the dataset, we propose four experimental designs challenging researchers to investigate the best ways to classify targets in measured SAR imagery given synthetic SAR training imagery.
Convolutional neural networks (CNN) are tremendously successful at classifying objects in electro-optical images. However, with synthetic aperture radar (SAR) data, off-the-shelf classifiers are insufficient because there are limited measured SAR data available and SAR images are not invariant to object manipulations. In this paper, we utilize the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to present an approach to the SAR measured and synthetic domain mismatch problem. We pre-process the synthetic and measured data using Variance-Based Joint Sparsity despeckling, quantization, and clutter transfer techniques. The t-SNE (t-distributed stochastic neighbor embedding) dimensionality reduction method is used to show that pre-processing the data in the proposed way brings the two-dimensional manifolds represented by the measured and synthetic data closer. A DenseNet classification network is trained with unprocessed and processed data, showing that when no measured data are available for training, it is beneficial to pre-process SAR data with the proposed technique.
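The t-SNE check described above can be sketched in a few lines with scikit-learn: embed measured and synthetic chips into 2-D and compare how close the two point clouds sit before and after pre-processing. The data here are random stand-ins and the t-SNE settings are assumptions.

```python
# Sketch of the t-SNE domain-proximity check: overlap between the measured
# and synthetic point clouds after pre-processing indicates the proposed
# pipeline has drawn the two domains together.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_meas = rng.normal(size=(200, 128 * 128))          # measured chips, flattened
X_syn = rng.normal(loc=0.5, size=(200, 128 * 128))  # synthetic chips, flattened

X = np.vstack([X_meas, X_syn])
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
# emb[:200] are measured points, emb[200:] synthetic; plot and compare.
```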
While many aspects of the image recognition problem have been largely solved by presenting large datasets to convolutional neural networks, there is still much work to do when data is sparse. For synthetic aperture radar (SAR), there is a lack of data that stems both from the cost of collecting data and from the small size of the community that collects and uses such data. In this case, electromagnetic simulation is an effective stopgap measure, but its effectiveness at mirroring reality is upper bounded both by the quality of the electromagnetic prediction code and by the fidelity of the target's digital model. In practice, we find that classification models trained on synthetic data generalize poorly to measured data. In this work, we investigate three machine learning networks, with the goal of using each network to bridge the gap between measured and synthetic data. We experiment with two types of generative adversarial networks as well as a modification of a convolutional autoencoder. Each network tackles a different aspect of the problem of the disparity between measured and synthetic data, namely: generating new, realistic, labeled data; translating data between the measured and synthetic domains; and joining the manifolds of the two domains into an intermediate representation (sketched below). Classification results using widely-employed neural network classifiers are presented for each experiment; these results suggest that such data manipulation improves classification generalization for measured data.
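The third approach, joining the two domains into an intermediate representation with a modified convolutional autoencoder, might look as follows. This is a hedged sketch under assumptions: a single shared encoder/decoder trained on both domains, with a simple alignment term nudging the latent statistics together; the paper's exact modification and losses are not specified here.

```python
# Shared convolutional autoencoder pushed toward a domain-agnostic latent
# space by reconstructing both domains and aligning their latent means.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
)
mse = nn.MSELoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

meas = torch.randn(8, 1, 64, 64)  # measured chips (stand-ins)
syn = torch.randn(8, 1, 64, 64)   # synthetic chips (stand-ins)

z_meas, z_syn = encoder(meas), encoder(syn)
recon = mse(decoder(z_meas), meas) + mse(decoder(z_syn), syn)
# Alignment term (an assumption) nudging the domains toward one manifold.
align = mse(z_meas.mean(dim=0), z_syn.mean(dim=0))
loss = recon + 0.1 * align
opt.zero_grad(); loss.backward(); opt.step()
```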
In a combat environment, synthetic aperture radar (SAR) is attractive for several reasons, including automatic target recognition (ATR). In order to effectively develop ATR algorithms, data from a wide variety of targets in different configurations is necessary. Naturally, collecting all this data is expensive and time-consuming. To mitigate the cost, simulated SAR data can be used to supplement real data, but accuracy and performance are degraded. We investigate the use of generative adversarial networks (GANs), a recent development in the field of machine learning, to make simulated data more realistic and therefore better suited to develop ATR algorithms for real-world scenarios. This class of machine learning algorithms has been shown to perform well at translating images between domains, making it a promising method for improving the realism of simulated SAR data. We compare the use of two different GAN architectures to perform this task. Data from the publicly available MSTAR dataset is paired with simulated data of the same targets and used to train each GAN. The resulting images are evaluated for realism and the ability to retain target class. We show the results of these tests and make recommendations for using this technique to inexpensively augment SAR datasets.
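Since the MSTAR and simulated chips are paired, a conditional, pix2pix-style setup is one natural fit; the abstract compares two GAN architectures without fixing them, so the compact sketch below is illustrative only. The tiny generator and discriminator, the L1 weight, and the chip sizes are all assumptions.

```python
# Compact paired image-translation sketch in the spirit of pix2pix: the
# generator maps simulated SAR chips toward measured ones, and an L1 term
# keeps the translation faithful to the paired measured target.
import torch
import torch.nn as nn

G = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-style real/fake map
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

sim = torch.randn(4, 1, 64, 64)   # simulated SAR chips
meas = torch.randn(4, 1, 64, 64)  # paired measured SAR chips

fake = G(sim)
# Discriminator sees (input, output) pairs, as in conditional GANs.
opt_d.zero_grad()
d_real = D(torch.cat([sim, meas], dim=1))
d_fake = D(torch.cat([sim, fake.detach()], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + \
         bce(d_fake, torch.zeros_like(d_fake))
d_loss.backward(); opt_d.step()

opt_g.zero_grad()
d_out = D(torch.cat([sim, fake], dim=1))
g_loss = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, meas)
g_loss.backward(); opt_g.step()
```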