We propose a framework for applying machine-learning-generated imagery to augment data variation for firearm detection in cargo X-ray images, and evaluate its impact. Deep-learning-based object detection has rapidly become the state of the art and a crucial technology for non-intrusive inspection (NII) based on X-ray radiography. The technology is widely employed to reduce or replace tedious, labor-intensive inspection that verifies cargo content and intercepts potential threats at border crossings, ports, and other critical infrastructure facilities. However, the need for variation in threat cargo content makes accumulating training data for such a system increasingly costly. Even though threat image projection (TIP) is widely employed to simplify the process by artificially projecting a known threat into benign images, a considerable number of threat object appearances is still needed. To further reduce the cost, we explore the use of Generative Adversarial Networks (GANs) to aid dataset creation. GANs are a successful deep learning technique for generating photorealistic imagery in many domains. We propose a three-stage training framework dedicated to firearm detection. First, a GAN is trained to generate variations of X-ray firearm appearance from binary masks, which yields better image quality than the commonly used random-noise input. Second, the detection training dataset is created from combinations of generated images and actual firearms using TIP. Finally, the dataset is used to train RetinaNet for detection. Our evaluations reveal that the GAN can reduce training cost while increasing detection performance: combining real and generated firearms improves detection of unseen firearms.
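The TIP compositing step described above follows a standard property of transmission X-ray imaging: under the Beer-Lambert law, attenuations add, so normalized transmission images combine multiplicatively. The sketch below illustrates this principle in NumPy; the function name and array conventions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tip_composite(background, threat, threat_mask):
    """Project a threat into a benign cargo image (TIP sketch).

    Both images are normalized transmission images in [0, 1]
    (1 = fully transparent). Because attenuations add in the
    Beer-Lambert model, transmissions multiply pixelwise where
    the threat is present. This is an illustrative sketch, not
    the authors' actual pipeline.
    """
    composite = background.copy()
    composite[threat_mask] *= threat[threat_mask]
    return composite

# Toy example: uniform cargo background, one threat pixel
bg = np.full((2, 2), 0.9)          # mostly transparent cargo
gun = np.full((2, 2), 0.5)         # denser threat object
mask = np.array([[True, False],
                 [False, False]])
out = tip_composite(bg, gun, mask)
```

In practice the GAN-generated firearm appearances would be converted to this transmission representation before projection, so generated and real threats can be composited with the same routine.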
Only a few percent of indeterminate nodules found in lung CT images are cancerous. However, enabling earlier diagnosis is important to spare patients with benign nodules invasive procedures or long-term surveillance. We evaluated a classification framework using radiomics features derived with a machine learning approach from a small dataset of indeterminate CT lung nodule images. We performed a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist, and texture features of each lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for classifying variable-sized nodules. Diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the contrast-enhanced CT group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Using a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed the texture and/or semantic features. However, the proposed Multiband approach to feature derivation produced diagnostic accuracy similar to that of the texture and semantic features. While the Multiband approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements in diagnostic accuracy. Importantly, the Multiband approach adapts readily to lesions of different sizes without interpolation and performed well with a relatively small amount of training data.
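The Triplet Network mentioned above is trained with the standard triplet loss, which pulls an anchor embedding toward a same-class (positive) example and pushes it away from a different-class (negative) example by at least a margin. A minimal NumPy sketch of that loss follows; the margin value and squared-distance convention are common defaults, assumed here rather than taken from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors (sketch).

    loss = max(||a - p||^2 - ||a - n||^2 + margin, 0)
    Zero loss means the negative is already at least `margin`
    farther (in squared distance) from the anchor than the
    positive is, which is the training objective for
    discriminative nodule embeddings.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.0, 0.1])   # same-class embedding, nearby
n_far = np.array([1.0, 0.0])   # well-separated negative
n_near = np.array([0.0, 0.2])  # negative too close: penalized
```

During training, such a loss would be backpropagated through the CNN so that benign and malignant nodules map to well-separated regions of the embedding space before the downstream classifier is applied.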