Deep neural networks demonstrate high performance at classifying high-dimensional signals, but often fail to generalize to data that differs from the data they were trained on. In this paper, we investigate the resilience of convolutional neural networks (CNNs) to unforeseen operating conditions. Specifically, we empirically evaluate the ability of CNN models to generalize across changes in image contrast. Multiple models are trained on electro-optical (EO) or near-infrared (IR) data, and are evaluated in environments with degraded contrast relative to training. Experiments are replicated across varying architectures, including state-of-the-art classification models such as ResNet-152, and across both synthetic and measured datasets. In comparison to models trained and evaluated on identically-distributed data, these models can generalize well when contrast invariance is built up through data augmentation. Future work will investigate CNN ability to generalize to other changes in operating conditions.
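As a concrete illustration of the augmentation strategy the abstract refers to, the sketch below degrades image contrast by scaling pixel deviations from the mean with a randomly sampled factor. The function names, the factor range, and the per-image sampling scheme are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def adjust_contrast(image, factor):
    """Scale pixel deviations from the image mean by `factor`.

    `image` is a float array with values in [0, 1]; `factor` = 1 leaves
    the image unchanged, and `factor` < 1 reduces contrast.
    """
    mean = image.mean()
    out = mean + factor * (image - mean)
    return np.clip(out, 0.0, 1.0)

def random_contrast_augment(image, low=0.2, high=1.0, rng=None):
    """Training-time augmentation: apply a random contrast factor per image.

    The [low, high] range is a hypothetical choice; in practice it would be
    tuned to span the contrast degradation expected at deployment.
    """
    rng = rng if rng is not None else np.random.default_rng()
    factor = rng.uniform(low, high)
    return adjust_contrast(image, factor)
```

Applying such a transform during training exposes the model to the degraded-contrast conditions it will face at evaluation, which is one common way to build the contrast invariance described above.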