In manned-unmanned teaming scenarios, autonomous unmanned robotic platforms with advanced sensing and compute capabilities will have the ability to perform online change detection. This change detection will consist of metric comparisons of sensor-based spatial information against information collected previously, for the purpose of identifying changes in the environment, from those indicating adversarial activity to those caused by natural phenomena, that could affect the mission. The previously collected information may come from a variety of sources, such as satellites, IoT devices, other manned-unmanned teams, or the same robotic platform on a prior mission. While these robotic platforms will be superior to their human operators at detecting changes, the human teammates will, for the foreseeable future, exceed the abilities of autonomy at interpreting those changes, particularly with respect to mission relevance and situational context. For this reason, the ability of a robot to intelligently convey such information in a way that maximizes human understanding is essential. In this work, we build upon previous work that presented a mixed reality interface for conveying change detection information from an autonomous robot to a human. We discuss factors affecting human understanding of augmented reality visualizations of detected changes, based upon multiple user studies in which a user interacts with this system. We believe our findings will inform the creation of AR-based communication strategies for manned-unmanned teams performing multi-domain operations.
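The metric comparison described above can be illustrated with a minimal sketch (not the system's actual pipeline): a current point cloud is compared against a previously collected reference map, and points with no nearby counterpart are flagged as candidate changes. The function name, distance threshold, and synthetic data below are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_changes(reference_cloud, current_cloud, threshold=0.25):
    """Flag points in the current scan with no nearby counterpart in the
    reference map. `threshold` is a hypothetical distance cutoff in meters."""
    tree = cKDTree(reference_cloud)        # spatial index over the prior observation
    dists, _ = tree.query(current_cloud)   # nearest-neighbor distance for each new point
    return current_cloud[dists > threshold]  # candidate "appeared" structure to show the human

# Example: a synthetic reference map and a current scan containing one new object
reference = np.random.rand(1000, 3) * 10.0
new_object = np.random.rand(50, 3) * 0.5 + np.array([20.0, 20.0, 0.0])
current = np.vstack([reference + np.random.normal(0, 0.02, reference.shape), new_object])
print(detect_changes(reference, current).shape)  # roughly the 50 new-object points
```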
Robots, equipped with powerful modern sensors and perception algorithms, have enormous potential to use what they perceive to provide enhanced situational awareness to their human teammates. One such type of information is the set of changes a robot detects in the environment since a previous observation. A major challenge in sharing this information from the robot to the human is the interface: how to properly aggregate change detection data, present it succinctly for the human to interpret, and allow the human to interact with the detected changes, e.g., to label them, discard them, or task the robot to investigate, for the purposes of enhanced situational awareness and decision making. In this work we address this challenge through the design of an augmented reality interface for aggregating, displaying, and interacting with changes detected by an autonomous robot teammate. We believe the outcomes of this work could have significant applications for Soldiers interacting with any type of high-volume, autonomously generated information in Multi-Domain Operations.
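One way to picture the aggregation and interaction described above is as a record per detected change that the AR interface can place in the world and the human can act on. The sketch below is a hypothetical data structure, not the interface's actual design; all field names and actions are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class ChangeAction(Enum):
    LABEL = "label"              # human annotates the change
    DISCARD = "discard"          # human marks it as irrelevant
    INVESTIGATE = "investigate"  # human tasks the robot to take a closer look

@dataclass
class DetectedChange:
    change_id: int
    position: Tuple[float, float, float]  # world-frame location for AR placement
    confidence: float                     # detector confidence in [0, 1]
    label: str = "unlabeled"
    history: List[ChangeAction] = field(default_factory=list)

    def apply(self, action: ChangeAction, label: Optional[str] = None):
        """Record a human interaction with this change."""
        self.history.append(action)
        if action is ChangeAction.LABEL and label:
            self.label = label

# Example: the human labels one change and tasks the robot to investigate another
changes = [DetectedChange(0, (3.2, 1.1, 0.0), 0.91),
           DetectedChange(1, (7.5, -2.4, 0.0), 0.42)]
changes[0].apply(ChangeAction.LABEL, label="displaced barrier")
changes[1].apply(ChangeAction.INVESTIGATE)
```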
Collaborative multi-sensor perception enables a sensor network to provide multiple views, or observations, of an environment and to collect them into a cohesive display. To do so, the observations must be intelligently fused. We briefly describe our existing approach to sensor fusion and selection, in which a weighted combination of observations is used to recognize a target object. The identified optimal weights control the fusion of multiple sensors while also selecting those that provide the most relevant or informative observations. In this paper, we propose a system that uses these optimal sensor fusion weights to control the display of observations to a human operator, providing enhanced situational awareness. The proposed system displays observations based on the physical locations of the sensors, enabling a human operator to better understand where observations are located in the environment. The optimal sensor fusion weights are then used to scale the display of each observation, highlighting those that are informative and making less relevant observations easy for a human operator to ignore.
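As a rough illustration of the display idea (not the authors' actual fusion or selection algorithm), the sketch below assumes per-sensor class scores and a given set of fusion weights, fuses them by a weighted combination, and linearly maps each weight to a marker scale for the operator display. The scale range, example scores, and weights are hypothetical.

```python
import numpy as np

def fuse_and_scale(observations, weights, min_scale=0.2, max_scale=1.0):
    """Fuse per-sensor class scores with a weighted combination, and map each
    sensor's weight to a display scale so informative views are emphasized."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                                 # normalize fusion weights
    fused = np.tensordot(weights, np.asarray(observations), axes=1)   # weighted combination of scores
    # Linearly map each weight onto [min_scale, max_scale] for the operator display
    w_min, w_max = weights.min(), weights.max()
    scales = min_scale + (weights - w_min) / (w_max - w_min + 1e-9) * (max_scale - min_scale)
    return fused, scales

# Example: three sensors scoring four target classes; sensor 1 is most informative
obs = [[0.20, 0.50, 0.20, 0.10],
       [0.05, 0.85, 0.05, 0.05],
       [0.30, 0.30, 0.20, 0.20]]
fused_scores, display_scales = fuse_and_scale(obs, weights=[0.2, 0.7, 0.1])
print(fused_scores, display_scales)  # sensor 1 dominates the fused score and is drawn largest
```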
Team communication is crucial in multi-domain operations (MDOs), which require teammates to collaborate on complex tasks synchronously in dynamic, unknown environments. To enable effective communication in human-robot teams, the human teammate must have an intuitive interface, one that supports the time-sensitive nature of the task, for communicating information to and from their robot teammate. Augmented Reality (AR) technologies can provide just such an interface by offering a medium for both active and passive robot communication. In this paper we propose a new framework that uses a Virtual Reality (VR) simulation environment for authoring and developing AR visualization strategies, and we present an AR solution for maximizing task performance in synchronized, time-dominant human-robot teaming. The framework utilizes a Unity-based VR simulator, run from the first-person point of view of the human teammate, that overlays AR features to virtually imitate the use of an AR headset in human-robot teaming scenarios. We then introduce novel AR visualizations that support strategic communication within teams by collecting information from each teammate and presenting it to the other in order to influence their decision making. Our proposed design framework and AR solution have the potential to impact any domain in which humans conduct synchronized multi-domain operations alongside autonomous robots in austere environments, including search and rescue, environmental monitoring, and homeland defense.
One of the most significant challenges for the emerging operational environment addressed by Multi-Domain Operations (MDO) is the exchange of information between personnel in operating environments. Making information available for leveraging at the appropriate echelon is essential for convergence, a key tenet of MDO. Emergent cross-reality (XR) technologies are poised to have a significant impact on the convergence of the information environment. These powerful technologies present an opportunity not only to enhance the situational awareness of individuals at the "local" tactical edge and the decision-maker at the "global" mission command (C2), but to intensely and intricately bridge the information exchanged across all echelons. Complementarily, the increasing use of autonomy in MDO, from autonomous robotic agents in the field to decision-making assistance for C2 operations, also holds great promise for human-autonomy teaming to improve performance at all echelon levels. Traditional research examines, at most, a small subset of these problems. Here, we envision a system in which human-robot teams operating at the local edge communicate with human-autonomy teams at the global operations level. Both teams use a mixed reality (MR) system for visualization and interaction with a common operating picture (COP) to enhance situational awareness, sensing, and communication, but with highly different purposes and considerations. By creating a system that bridges across echelons, we are able to examine these considerations to determine their impact on information shared bi-directionally between the global (C2) and local (tactical) levels, in order to understand and improve autonomous agents teamed with humans at both levels. We present a prototype system that includes an autonomous robot operating with a human teammate, sharing sensory data and action plans with, and receiving commands and intelligence information from, a tactical operations team commanding from a remote location. We examine the challenges and considerations in creating such a system, and present initial findings.
Conference Committee Involvement (3)
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations III
4 April 2022 | Orlando, Florida, United States
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II
12 April 2021 | Online Only, Florida, United States
Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations
27 April 2020 | Online Only, California, United States