Proceedings Article | 5 May 2017
Stephen Buerger, Anup Parikh, Steven Spencer, Mark Koch
KEYWORDS: Unmanned systems, Control systems, Sensors, Defense and security, Taxonomy, Data fusion, Image segmentation, Classification systems, Head, Skin, Imaging systems, Eye, RGB color model
As unmanned systems (UMS) proliferate for security and defense applications, autonomous control system capabilities that enable them to perform tactical operations are of increasing interest. These operations, in which UMS must match or exceed the performance and speed of people or manned assets even in the presence of dynamic mission objectives and unpredictable adversary behavior, are well beyond the capability of even the most advanced control systems demonstrated to date. In this paper we deconstruct the tactical autonomy problem, identify the key technical challenges, and place them in the context of the autonomy taxonomy produced by the US Department of Defense’s Autonomy Community of Interest. We argue that two capabilities beyond the state of the art are required to enable an initial fieldable system: rapid abstract perception in appropriate environments, and tactical reasoning. We summarize our work to date on tactical reasoning and present initial results from a new research program focused on abstract perception in tactical environments. This approach seeks to apply semantic labels to a broad set of objects via three core thrusts. First, we use physics-based multi-sensor fusion to enable generalization from imperfect and limited training data. Second, we pursue methods to optimize sensor perspective to improve object segmentation, mapping, and, ultimately, classification. Finally, we assess the potential impact of sensors not traditionally used by UMS to perceive their environment, such as hyperspectral imagers, on the ability to identify objects. Our technical approach and initial results are presented.
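To make the fusion thrust concrete, the sketch below is a minimal illustration, not the paper's implementation: all label names, priors, and likelihood values are invented assumptions. It shows one common way physics-based multi-sensor fusion can be framed: each modality supplies a physics-derived likelihood for each candidate semantic label, and the per-sensor likelihoods are combined with a Bayesian update rather than learned end-to-end from scarce training data.

```python
import numpy as np

# Candidate semantic labels and a prior over them. These names and numbers
# are illustrative assumptions; the paper's actual label set is not given.
LABELS = ["person", "vehicle", "vegetation"]
PRIOR = np.array([0.25, 0.25, 0.50])

def fuse(prior, likelihoods):
    """Naive-Bayes fusion: multiply per-sensor likelihoods into the prior,
    assuming measurements are conditionally independent given the label."""
    posterior = prior.copy()
    for lik in likelihoods:
        posterior *= lik
    return posterior / posterior.sum()  # normalize to a distribution

# Per-sensor likelihood vectors P(measurement | label). In a physics-based
# system these would come from reflectance, geometry, or material models
# rather than learned classifiers; the values below are made up.
rgb_lik = np.array([0.30, 0.50, 0.20])            # RGB camera: color/texture model
lidar_lik = np.array([0.40, 0.45, 0.15])          # LIDAR: shape/size model
hyperspectral_lik = np.array([0.70, 0.10, 0.20])  # hyperspectral: material spectra

posterior = fuse(PRIOR, [rgb_lik, lidar_lik, hyperspectral_lik])
for label, p in zip(LABELS, posterior):
    print(f"{label}: {p:.3f}")
```

Because the likelihoods in this framing come from physical models rather than learned features, the same fusion rule can be applied to objects that appear rarely or never in training data, which is the generalization motivation stated in the abstract.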