A critical limitation in applying deep learning to radar signal classification is the lack of sufficient data to train very deep neural networks. Network depth is one of the most significant parameters affecting achievable classification accuracy. One way to overcome this challenge is to generate synthetic samples for training deep neural networks (DNNs). In prior work by the authors, two methods were developed: 1) diversified micro-Doppler signature generation via transformations of an underlying skeletal model derived from video motion capture (MOCAP) data, and 2) auxiliary conditional generative adversarial networks (ACGANs) with kinematic sifting. While diversified MOCAP has the advantage of greater accuracy in generating signatures that span the probable target space of expected human motion across different body sizes, speeds, and individualized gaits, it cannot capture data artifacts due to sensor imperfections or clutter. In contrast, adversarial learning has been shown to capture non-target-related artifacts; however, ACGANs can also generate misleading signatures that are kinematically impossible. This paper provides an in-depth performance comparison of the two methods on a through-the-wall radar data set of human activities of daily living (ADL) in the presence of clutter and sensor artifacts.