In today’s battlefield environments, analysts are inundated with real-time data received from the tactical
edge that must be evaluated and used to manage and modify current missions as well as to plan
future missions. This paper describes a framework that facilitates a Value of Information (VoI) based
data analytics tool for information object (IO) analysis in a tactical and command and control (C2)
environment, reducing analyst workload by providing automated or analyst-assisted applications. The
framework allows the analyst to adjust parameters for data matching of incoming IOs and provides
agents for further filtering or fusing of the incoming data. It also allows analyst enhancements and markup
to be made to, and/or comments to be attached to, incoming IOs, which can then be re-disseminated
using the VoI-based dissemination service. The analyst may also adjust the underlying parameters
before re-dissemination of an IO, which subsequently adjusts the value of the IO based on the
new or additional information that has been added, possibly increasing it above the original value. The
framework is flexible and extensible, providing an easy-to-use, dynamically changing command and
control decision aid that focuses and enhances the analyst workflow.
KEYWORDS: Information operations, Sensors, Detection and tracking algorithms, Data acquisition, Databases, Data storage, Sensor networks, Dismounted soldiers, Statistical analysis, Data centers
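The VoI-based valuation and re-dissemination workflow described above can be sketched with a toy model. All attribute names, weights, and the annotation bonus below are our illustrative assumptions, not the framework's actual scoring:

```python
from dataclasses import dataclass, field

@dataclass
class InformationObject:
    """A hypothetical information object (IO) with VoI-relevant attributes."""
    source_reliability: float  # 0..1, assumed attribute
    timeliness: float          # 0..1, 1 = freshly received
    relevance: float           # 0..1, match against analyst-set parameters
    annotations: list = field(default_factory=list)

def value_of_information(io, weights=(0.4, 0.3, 0.3), annotation_bonus=0.05):
    """Toy VoI score: a weighted sum of attributes, boosted by analyst markup."""
    base = (weights[0] * io.source_reliability
            + weights[1] * io.timeliness
            + weights[2] * io.relevance)
    # Analyst enhancement/markup can raise the value before re-dissemination
    return min(1.0, base + annotation_bonus * len(io.annotations))

io = InformationObject(source_reliability=0.8, timeliness=0.9, relevance=0.7)
print(value_of_information(io))          # value before analyst markup
io.annotations.append("analyst comment")
print(value_of_information(io))          # value increases after markup
```

The key property the sketch captures is that analyst enhancement can raise an IO's value above its original score before re-dissemination.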
Recent technological advances in the areas of sensors, computation, and storage have led to the development of relatively inexpensive sensors that have been deployed on a wide scale and are able to generate large volumes of data. However, tactical networks have not been able to keep pace in terms of their ability to transfer all of the sensor data from the edge to an operations center for analysis. This paper explores multiple techniques to help bridge this gap, by using a three-pronged approach based on value of information-based dissemination, active sensor query capabilities, and anomaly detection mechanisms. These capabilities are being integrated into an open-source sensor platform deployed in a testbed environment for evaluation purposes.
Currently, the 3000+ robotic systems fielded in theater are entirely teleoperated. This constant dependence on operator
control introduces several problems, including a large cognitive load on the operator and a limited ability
to maintain an appropriate level of situational awareness of the surroundings. One solution to reduce the dependence on
teleoperation is to develop autonomous behaviors for the robot, reducing the strain on the operator.
We consider mapping and navigation to be fundamental to the development of useful field autonomy for small
unmanned ground vehicles (SUGVs). To this end, we have developed baseline autonomous capabilities for our SUGV
platforms, making use of the open-source Robot Operating System (ROS) software from Willow Garage, Inc. Their
implementations of mapping and navigation are drawn from the most successful published academic algorithms in
robotics.
In this paper, we describe how we bridged our previous work with the Packbot Explorer to incorporate a new processing
payload, new sensors, and the ROS system configured to perform the high-level autonomy tasks of mapping and
waypoint navigation. We document our most successful parameter selection for the ROS navigation software in an
indoor environment and present results of a mapping experiment.
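Grid-based waypoint navigation of the kind provided by the ROS navigation stack can be illustrated with a minimal breadth-first planner. This is a teaching sketch only; the actual ROS planners operate on costmaps with more sophisticated search:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first waypoint planner on a 4-connected occupancy grid.

    grid[r][c] == 1 marks an obstacle; returns a list of (row, col)
    waypoints from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent chain back to start, then reverse it
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```

Breadth-first search guarantees a shortest path in cells; real navigation stacks trade this simplicity for costmap-aware planners and local obstacle avoidance.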
There exists a current need to rapidly and accurately identify the presence and location of optical imaging devices used
in counter-surveillance activities against U.S. troops deployed abroad. The locations of devices employed in counter-surveillance
activities can be identified through detection of the optically augmented reflection from these devices. To
address this need, we have developed a novel optical augmentation sensor, the Mobile Optical Detection System
(MODS), which is uniquely designed to identify the presence of optical systems of interest. The essential components of
the sensor are three spectrally diverse diode lasers (one ultraviolet, two near-infrared), which are integrated to produce a single
multi-wavelength interrogation beam, and a charge-coupled-device (CCD) receiver, which is used to detect the
retroreflected optical beam returned from a target of interest. The multi-spectral diode laser illuminator and digital
receiver are configured in a pseudo-monostatic arrangement and are controlled through a customized computer interface.
By comparison, MODS is unique among optical augmentation (OA) sensors since it employs a collection of wavelength-diverse, continuous-wave
(CW) diode laser sources which facilitate the identification of optical imaging devices used for counter-surveillance
activities. In addition, digital image processing techniques are leveraged to facilitate improved clutter
rejection concomitant with highly specific target location (e.g., azimuth and elevation). Moreover, the digital output format
makes the sensor amenable to a wide range of interface options, including computer networks, eyepieces, and remotely located
displays linked through wireless nodes.
The Army Research Laboratory (ARL) is researching a short-range ladar imager for small unmanned ground vehicles for
navigation, obstacle/collision avoidance, and target detection and identification. To date, commercial ladars for this
application have been flawed by one or more factors, including low pixelization, insufficient range or range resolution,
image artifacts, lack of daylight operation, large size, high power consumption, and high cost. In the prior year we conceived
a scanned ladar design based on a newly developed but commercial MEMS mirror and a pulsed Erbium fiber laser. We
initiated construction, and performed in-lab tests that validated the basic ladar architecture. This year we improved the
transmitter and receiver modules and successfully tested a new
low-cost and compact Erbium laser candidate. We further
developed the existing software to allow adjustment of operating parameters on the fly and display of the imaged data in
real time. For our most significant achievement, we mounted the ladar on an iRobot PackBot and wrote software to
integrate PackBot and ladar control signals and ladar imagery on the PackBot's computer network. We recently remotely
drove the PackBot over an in-lab obstacle course while displaying the ladar data in real time over a wireless link. The ladar
has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, a 20 m range, eyesafe
operation, and 40 cm range resolution (with provisions for super-resolution or accuracy). This paper will describe the
ladar design and update progress in its development and performance.
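The quoted specifications imply some basic timing figures; the following is a quick back-of-the-envelope check (our arithmetic from the stated specs, not figures from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(range_m):
    """Time for a ladar pulse to travel to a target and back."""
    return 2.0 * range_m / C

def timing_for_resolution(delta_r):
    """Timing precision needed to resolve delta_r in range."""
    return 2.0 * delta_r / C

# 20 m maximum range implies a ~133 ns round-trip time
print(round_trip_time(20.0) * 1e9)        # ≈ 133.4 ns
# 40 cm range resolution implies ~2.7 ns timing resolution
print(timing_for_resolution(0.40) * 1e9)  # ≈ 2.67 ns
# 256 x 128 pixels at ~5.5 Hz implies the per-second measurement rate
print(256 * 128 * 5.5)                    # 180224.0 pixels/s
```

These figures show why a short-pulse fiber laser and fast receiver timing are natural choices for the stated resolution and frame rate.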
The U.S. Army Research Laboratory's (ARL) Computational and Information Sciences Directorate (CISD) has long
been involved in autonomous asset control, specifically as it relates to small robots. Over the past year, CISD has been
making strides in the implementation of three areas of small robot autonomy, namely platform autonomy, Soldier-robot
interface, and tactical behaviors. It is CISD's belief that these three areas must be considered as a whole in order to
provide Soldiers with useful capabilities.
In addressing these areas, CISD has integrated a COTS LADAR into the head of an iRobot PackBot Explorer, providing
ranging information with minimal disruption to the physical characteristics of the platform. Using this range data, an
implementation of obstacle detection and avoidance (OD/OA), leveraged from an existing autonomy software suite,
runs on the platform's native processor. These capabilities will serve as the foundation of our targeted behavior-based
control methodologies. The first behavior is guarded tele-operation, which augments the existing ARL robotic
control infrastructure. The second is the implementation of a multi-robot cooperative mapping behavior. Developed at
ARL, collaborative simultaneous localization and mapping (CSLAM) will allow multiple robots to build a common map
of an area, providing the Soldier operator with a singular view of that area.
This paper will describe the hardware and software integration of the LADAR sensor into the ARL robotic control
system. Further, the paper will discuss the implementation of the small robot OD/OA and CSLAM software components
performed by ARL, as well as results on their performance and benefits to the Soldier.
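The common-map idea behind CSLAM can be illustrated with a toy occupancy-grid merge. This is a simplification that assumes the two maps are already registered in a shared frame, which is the hard part CSLAM actually solves:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def merge_maps(map_a, map_b):
    """Naively merge two aligned occupancy grids into one common map.

    Any observation wins over UNKNOWN; where both robots observed a
    cell, OCCUPIED wins over FREE (a conservative choice).
    """
    merged = np.full_like(map_a, UNKNOWN)
    both_known = (map_a != UNKNOWN) & (map_b != UNKNOWN)
    merged[both_known] = np.maximum(map_a, map_b)[both_known]
    only_a = (map_a != UNKNOWN) & (map_b == UNKNOWN)
    merged[only_a] = map_a[only_a]
    only_b = (map_b != UNKNOWN) & (map_a == UNKNOWN)
    merged[only_b] = map_b[only_b]
    return merged

# Two robots that each observed a different half of a 2x2 area
a = np.array([[FREE, UNKNOWN], [OCCUPIED, UNKNOWN]])
b = np.array([[UNKNOWN, FREE], [FREE, OCCUPIED]])
print(merge_maps(a, b))
```

In a real CSLAM system the relative poses of the robots must first be estimated so the maps can be registered; only then does a cell-wise merge like this produce a usable common map.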