An open research question is how to best pair a human and an agent (e.g., an AI or autonomous robot) for a complex, multi-objective task in a dynamic, unknown, and partially observable environment. At the heart of this challenge reside even deeper questions, such as what AI is needed and how bi-directional and multi-directional human-robot trust can be established. In this paper, the theoretical framework for a simple 2D grid-world-based cooperative search and rescue game is explored. The resultant prototype interface enables the study of human-robot interaction for human-robot teaming. First, the design and implementation of the prototype interface are discussed. A 2D grid world was selected to simplify the investigation and eliminate confounding factors that arise in more complicated simulated 3D and real-world experiments. Next, different types of autonomous agents are introduced, as they impact our studies and are ultimately an integral element of the underlying research question. This is followed by three open-ended games of increasing complexity: easy, medium, and hard. The current paper does not contain human experimentation results; that is the next step in this research. Instead, this article introduces, explains, and defends a set of design choices, and working examples are provided to facilitate open discussion.
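To make the setting concrete, the following is a minimal, hypothetical sketch of a 2D grid-world cooperative search-and-rescue round in Python; the class, method, and variable names are illustrative only and do not reflect the prototype interface described in the paper.

```python
import random

# Hypothetical sketch of a 2D grid-world cooperative search-and-rescue round.
# GridWorld, step(), VICTIM, etc. are illustrative placeholders, not the
# prototype interface described in the paper.

VICTIM, EMPTY = "V", "."

class GridWorld:
    def __init__(self, size=10, n_victims=3, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.cells = [[EMPTY] * size for _ in range(size)]
        for _ in range(n_victims):
            r, c = rng.randrange(size), rng.randrange(size)
            self.cells[r][c] = VICTIM
        self.agent = (0, 0)                 # autonomous agent position
        self.human = (size - 1, size - 1)   # human-controlled position
        self.rescued = 0

    def step(self, who, move):
        """Apply a move ('up', 'down', 'left', 'right') for 'agent' or 'human'."""
        dr, dc = {"up": (-1, 0), "down": (1, 0),
                  "left": (0, -1), "right": (0, 1)}[move]
        r, c = self.agent if who == "agent" else self.human
        r = max(0, min(self.size - 1, r + dr))
        c = max(0, min(self.size - 1, c + dc))
        if self.cells[r][c] == VICTIM:      # rescue on arrival at a victim cell
            self.cells[r][c] = EMPTY
            self.rescued += 1
        if who == "agent":
            self.agent = (r, c)
        else:
            self.human = (r, c)
        return self.rescued

world = GridWorld()
world.step("agent", "down")
```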
The Sensor Analysis and Intelligence Laboratory (SAIL) at Mississippi State University's (MSU's) Center for Advanced Vehicular Systems (CAVS) and the Social, Therapeutic and Robotic Systems Lab (STaRS) in MSU's Computer Science and Engineering department have designed and implemented a modular platform for automated sensor data collection and processing, named the Hydra. The Hydra is an open-source system (all artifacts and code are published to the research community). It consists of a modular rigid mounting platform (sensors, processors, power supply and conditioning) that utilizes the Picatinny rail (a standardized mounting system originally developed for firearms), a software platform built on the Robot Operating System (ROS) for data collection, and design packages (schematics, CAD drawings, etc.). The Hydra system streamlines the assembly of a configurable multi-sensor system. It is motivated by the need to enable researchers to quickly select sensors, assemble them as an integrated system, and collect data without having to recreate the Hydra's hardware and software. Prototype results are presented from a recent data collection on a small robot during a SWAT-robot training exercise.
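As a rough illustration of ROS-based data collection in the spirit of the Hydra software platform, the following Python (rospy) node subscribes to a camera topic and an IMU topic and logs incoming messages; the topic names and message types are assumptions and the sketch is not taken from the published Hydra code.

```python
#!/usr/bin/env python
# Illustrative ROS data-collection node; topic names and message types are
# assumptions, not the published Hydra software.
import rospy
from sensor_msgs.msg import Image, Imu

def image_cb(msg):
    rospy.loginfo("image frame %s at %s", msg.header.seq, msg.header.stamp)

def imu_cb(msg):
    rospy.loginfo("imu sample at %s", msg.header.stamp)

if __name__ == "__main__":
    rospy.init_node("hydra_collector_sketch")
    rospy.Subscriber("/camera/image_raw", Image, image_cb)
    rospy.Subscriber("/imu/data", Imu, imu_cb)
    rospy.spin()  # block and let the callbacks handle incoming sensor data
```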
Autonomous unmanned ground vehicles (UGVs) are beginning to play a more critical role in military operations. As the size of the fighting forces continues to draw down, the U.S. and coalition partner Armed Forces will become increasingly reliant on UGVs to perform mission-critical roles. These roles range from squad-level manned-unmanned teaming to large-scale autonomous convoy operations. However, as more UGVs with increasing levels of autonomy enter the field, tools for accurately predicting these UGVs' performance and capabilities are lacking. In particular, the mobility of autonomous UGVs is a largely unsolved problem. While legacy tools for predicting ground vehicle mobility, most notably the NATO Reference Mobility Model, are available for both assessing performance and planning operations, no such toolset exists for autonomous UGVs. Once autonomy comes into play, ground vehicle mechanical mobility is no longer enough to characterize vehicle mobility performance. Not only will vehicle-terrain interactions and driver concerns impact mobility, but sensor-environment interactions will also affect mobility. UGV mobility will depend in large part on the sensor data available to drive the UGV's autonomy algorithms. A limited amount of research has been focused on the concept of perception-based mobility to date. To that end, the presented work will provide a review of the tools and methods developed thus far for modeling, simulating, and assessing autonomous mobility for UGVs. This review will highlight both the modifications being made to current mobility modeling tools and new tools in development specifically for autonomous mobility modeling. In light of this review, areas of current need will also be highlighted, and recommended steps forward will be proposed.
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational
intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is
finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest
in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop
signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the
GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of
this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for
the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential
velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition
of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using
various combinations of selection, crossover, and mutation operators and experimentation was also performed with the
PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence
scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and
drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful
problem specific parameter sets.
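As an illustration of the kind of PSO modifications listed above, the sketch below shows a velocity update with an exponentially decaying inertia weight and an additional attractive term toward an "execution best" position; the constants, decay schedule, and parameter dimensions are assumptions and are not the values used in the paper.

```python
import math
import random

# Illustrative PSO velocity update with exponential inertia decay and an
# extra attraction toward a global "execution best" position. Constants and
# the decay schedule are assumptions, not the paper's values.

def update_velocity(v, x, p_best, g_best, exec_best, t,
                    w0=0.9, decay=0.01, c1=2.0, c2=2.0, c3=1.0):
    w = w0 * math.exp(-decay * t)          # exponentially decaying inertia weight
    r1, r2, r3 = (random.random() for _ in range(3))
    return [w * vi
            + c1 * r1 * (pb - xi)          # cognitive pull toward personal best
            + c2 * r2 * (gb - xi)          # social pull toward neighborhood best
            + c3 * r3 * (eb - xi)          # attraction toward "execution best"
            for vi, xi, pb, gb, eb in zip(v, x, p_best, g_best, exec_best)]

# Example: one velocity update in a 3-dimensional MSER parameter space
# (e.g., delta, min_area, max_area -- purely illustrative dimensions).
v = update_velocity([0.0, 0.0, 0.0], [5.0, 60.0, 14400.0],
                    [4.0, 50.0, 14000.0], [6.0, 55.0, 14200.0],
                    [5.5, 52.0, 14100.0], t=10)
```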
Thermal-infrared cameras are used for signal/image processing and computer vision in numerous military and civilian applications.
However, the cost of high quality (e.g., low noise, accurate temperature measurement, etc.) and high resolution
thermal sensors is often a limiting factor. On the other hand, high resolution visual spectrum cameras are readily available
and typically inexpensive. Herein, we outline a way to upsample thermal imagery with respect to a high resolution visual
spectrum camera using Markov random field theory. This paper also explores the tradeoffs and impact of upsampling,
both qualitatively and quantitatively. Our preliminary results demonstrate the successful use of this approach for human
detection and accurate propagation of thermal measurements in an image for more general tasks like scene understanding.
A tradeoff analysis of the cost-to-performance as the resolution of the thermal camera decreases is provided.
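For intuition, the following is a minimal sketch of edge-guided thermal upsampling posed as inference on a simple pairwise MRF, with a data term tied to the upsampled low-resolution thermal values and a smoothness term weighted by high-resolution visual gradients; the energy, weights, and solver are illustrative assumptions rather than the formulation used in the paper.

```python
import numpy as np

# Sketch of guided thermal upsampling as MAP inference on a simple pairwise
# MRF: data term tied to nearest-neighbor-upsampled thermal values, smoothness
# term weighted by high-resolution visual edges. Illustrative only.

def upsample_mrf(thermal_lr, visual_hr, scale, lam=0.2, sigma=10.0, iters=50):
    # Nearest-neighbor initialization of the high-resolution thermal estimate.
    t = np.kron(thermal_lr, np.ones((scale, scale)))
    obs = t.copy()
    # Edge-aware weights: smooth strongly where the visual image is flat.
    gy, gx = np.gradient(visual_hr.astype(float))
    w = np.exp(-(gx**2 + gy**2) / (2 * sigma**2))
    for _ in range(iters):
        # 4-neighbor average (wrap-around boundaries for simplicity).
        up    = np.roll(t, 1, 0); down  = np.roll(t, -1, 0)
        left  = np.roll(t, 1, 1); right = np.roll(t, -1, 1)
        neigh = (up + down + left + right) / 4.0
        # Weighted Jacobi-style update blending data and smoothness terms.
        t = (obs + lam * w * neigh) / (1.0 + lam * w)
    return t

thermal_lr = np.random.rand(8, 8) * 30 + 20   # synthetic low-res thermal (deg C)
visual_hr  = np.random.rand(32, 32) * 255     # synthetic high-res visual image
thermal_hr = upsample_mrf(thermal_lr, visual_hr, scale=4)
```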
A vision system was designed for people detection to provide support to SWAT team members operating in challenging environments such as low-to-no light, smoke, etc. When the vision system is mounted on a mobile robot platform, it will enable the robot to function as an effective member of the SWAT team: providing surveillance information, making first contact with suspects, and providing safe entry for team members. The vision task is challenging because SWAT team members are typically concealed, carry various equipment such as shields, and perform tactical and stealthy maneuvers. Occlusion is a particular challenge because team members operate in close proximity to one another. An uncooled electro-optical/long wave infrared (EO/LWIR) camera, 7.5 to 13.5 μm, was used. A unique thermal dataset of SWAT team members from multiple teams performing tactical maneuvers was collected during monthly training exercises. Our approach consisted of two stages: an object detector trained on people to find candidate windows, and a secondary feature extraction, multi-kernel (MK) aggregation, and classification step to distinguish between SWAT team members and civilians. Two types of thermal features, local and global, are presented based on maximally stable extremal region (MSER) blob detection. Support vector machine (SVM) classification results of approximately [70, 93]% for SWAT team member detection are reported based on the exploration of different combinations of visual information in terms of training data.
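As a simplified illustration of the second stage, the sketch below extracts MSER blob statistics from thermal candidate windows and trains an SVM; the feature vector is a stand-in for the local and global thermal features and multi-kernel aggregation described above, and all data are synthetic.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Simplified stand-in for the second stage: MSER blob statistics from a
# thermal candidate window fed to an SVM. The actual local/global features
# and multi-kernel aggregation in the paper are not reproduced here.

def mser_features(window_gray):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(window_gray)
    if not regions:
        return np.zeros(3)
    areas = np.array([len(r) for r in regions], dtype=float)
    # Simple global blob statistics: count, mean area, area spread.
    return np.array([len(regions), areas.mean(), areas.std()])

# Synthetic candidate windows and labels (1 = SWAT team member, 0 = civilian).
windows = [np.random.randint(0, 255, (64, 32), dtype=np.uint8) for _ in range(20)]
labels = [i % 2 for i in range(20)]

X = np.vstack([mser_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:4]))
```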