The Label-Diffusion-LIDAR-Segmentation (LDLS) algorithm uses multi-modal data for enhanced inference of environmental categories. The algorithm segments the Red-Green-Blue (RGB) image and maps the results onto the LIDAR point cloud using matrix calculations to reduce noise. Recent research has developed custom optimization techniques using quantization to accelerate 3D object detection with LDLS in robotic systems. These optimizations achieve a 3x speedup over the original algorithm, making it possible to deploy the algorithm in real-world applications. The optimizations include quantization of the segmentation inference as well as matrix optimizations for the label diffusion. We will present our results, compare them with the baseline, and discuss their significance in achieving real-time object detection in resource-constrained environments.
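To make the label-diffusion step concrete, the following is a minimal sketch of diffusing 2D segmentation labels onto LIDAR points via sparse matrix multiplication; the graph construction, names, and iteration count are our assumptions, not the optimized implementation described above.

```python
import numpy as np
from scipy import sparse

def diffuse_labels(W, pixel_labels, n_iters=50):
    """Iteratively diffuse 2D mask labels onto 3D points (sketch).

    W            : (n_points + n_pixels, n_points + n_pixels) sparse,
                   row-normalized graph connecting LIDAR points to the
                   image pixels they project onto.
    pixel_labels : (n_pixels, n_classes) one-hot labels from the RGB
                   instance segmentation.
    """
    n_pixels, n_classes = pixel_labels.shape
    n_points = W.shape[0] - n_pixels
    # Label matrix: unknown point labels start at zero; pixel labels fixed.
    L = np.zeros((W.shape[0], n_classes))
    L[n_points:] = pixel_labels
    for _ in range(n_iters):
        L = W @ L                      # one diffusion step (sparse matmul)
        L[n_points:] = pixel_labels    # clamp the labeled pixel nodes
    return L[:n_points].argmax(axis=1)
```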
We optimized and deployed Virtuoso, an adaptive framework that maintains real-time object detection even under high-contention scenarios. The original Virtuoso framework uses an adaptive algorithm for the detection frame followed by a low-cost algorithm for the tracker frame, which uses down-sampled images to reduce computation. One of our optimizations detaches the single synchronous thread for detection and tracking into two parallel threads. This multi-threaded implementation allows computationally high-cost detection algorithms to be used while still maintaining real-time output from the tracker thread. Another optimization we developed uses multiple down-sampled images to initialize each tracker based on the size of its input box; the multiple down-sampled images allow each tracker to choose the optimal image size for the box it is tracking, rather than a single down-sampled image being used for all trackers.
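A minimal sketch of the detached detection/tracker threading pattern, assuming hypothetical detector, tracker_bank, and camera interfaces (these names are illustrative, not the deployed code):

```python
import queue
import threading

latest_frame = queue.Queue(maxsize=1)   # newest camera frame
latest_boxes = queue.Queue(maxsize=1)   # newest detector output

def _replace(q, item):
    """Keep only the most recent item in a size-1 queue."""
    try:
        q.get_nowait()
    except queue.Empty:
        pass
    q.put(item)

def detection_loop(detector):
    """High-cost detector: runs as fast as it can on the newest frame."""
    while True:
        frame = latest_frame.get()                       # blocks for a frame
        _replace(latest_boxes, detector.detect(frame))   # slow CNN call

def tracking_loop(camera, tracker_bank, on_output):
    """Low-cost trackers: run every frame for real-time output."""
    while True:
        frame = camera.read()
        _replace(latest_frame, frame)
        try:   # re-initialize trackers whenever a fresh detection lands
            tracker_bank.reinit(latest_boxes.get_nowait(), frame)
        except queue.Empty:
            pass                                         # keep tracking old boxes
        on_output(tracker_bank.update(frame))

# threading.Thread(target=detection_loop, args=(detector,), daemon=True).start()
```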
One needs a good communication cost model [1, 2] for optimizing off-loaded computation in a tactical environment. Recently, we presented a mathematical cost model for optimizing those computations [3]. It applies to Autonomous Mobile Agents (AMAs) in the field communicating via a resource-constrained multi-node tactical network. In the present work, we incorporate delay and recast the situation as a linear programming (LP) optimization problem.
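As an illustration of the kind of linear program we have in mind (the notation below is ours, not necessarily that of [3]):

```latex
% x_{ij} = fraction of task i offloaded to node j,
% c_{ij} = communication + computation cost, d_{ij} = link delay,
% r_i = resource demand of task i, R_j = capacity of node j,
% \lambda = relative weight of delay against cost.
\begin{align*}
\min_{x \ge 0} \quad & \sum_{i}\sum_{j} x_{ij}\,\bigl(c_{ij} + \lambda\, d_{ij}\bigr) \\
\text{s.t.} \quad & \sum_{j} x_{ij} = 1 \quad \forall i
      && \text{(each task fully assigned)} \\
& \sum_{i} x_{ij}\, r_i \le R_j \quad \forall j
      && \text{(node capacity limits)}
\end{align*}
```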
Supervised machine learning depends on training a model to mimic previously labeled results. With a small dataset, data augmentation is necessary to increase the model's generalization to future images, but we have observed that future images won't necessarily be in the same domain as the augmented images. To alleviate this problem, we segment the same image multiple times, applying the training-time data augmentation techniques to each copy, and then merge the results using a priority based on the class weights used when training the model. Merging the segmentation results from the augmented images increased the mean-intersection-over-union relative to inference on a single image.
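A minimal sketch of the priority-based merge, assuming the per-augmentation masks have already been warped back to the original image geometry; the priority derivation from class weights is our simplification:

```python
import numpy as np

def merge_augmented_masks(masks, class_priority):
    """Merge segmentation masks from augmented copies of one image.

    masks          : list of (H, W) integer class maps, each already
                     un-augmented back to the original geometry.
    class_priority : (n_classes,) array; higher value wins a pixel.
                     Derived here from the training class weights,
                     so rare classes get higher priority.
    """
    prio = np.asarray(class_priority)
    merged = masks[0].copy()
    for mask in masks[1:]:
        take = prio[mask] > prio[merged]   # new class outranks current
        merged[take] = mask[take]
    return merged
```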
The proposed transitory cross entropy loss function performs a weighted average of the cross entropy using both the truth labels and the predicted labels; this is a variation of the weighted cross entropy loss function that performs a weighted average using just the truth labels. We tested the transitory cross entropy loss function by training ICNet on the CityScapes dataset and saw an increase in the mean-intersection-over-union relative to the model trained using the standard weighted cross entropy loss function. We further propose modifying the weights based on dynamic performance metrics rather than just static distribution metrics.
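One plausible reading of the transitory loss relative to standard weighted cross entropy is the blended form below; the exact blending schedule in the work may differ, and the proposed dynamic variant would further make the weights a function of running performance metrics such as per-class IoU.

```latex
% y = one-hot truth labels, \hat{y} = predicted distribution,
% w_c = class weights, \lambda \in [0,1] blends the truth- and
% prediction-weighted terms (notation ours).
\begin{align*}
\mathcal{L}_{\text{WCE}}   &= -\sum_{c} w_c\, y_c \log \hat{y}_c \\
\mathcal{L}_{\text{trans}} &= -\sum_{c} w_c \bigl[\lambda\, y_c
    + (1-\lambda)\, \hat{y}_c \bigr] \log \hat{y}_c
\end{align*}
```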
With the advent of neural networks, users at the tactical edge have started experimenting with AI-enabled intelligent mission applications. Autonomy stacks have been proposed for tactical environments for sensing, reasoning, and computing situational awareness to provide the human in the loop with actionable intelligence in mission time. Tactical edge computing platforms must employ small-form-factor modules for compute, storage, and networking functions that conform to strict size, weight, and power (SWaP) constraints. Many of the neural network models proposed for the tactical AI stack are computationally complex and may not be deployable without modifications. In this paper we discuss deep neural network optimization approaches for resource-constrained tactical unmanned ground vehicles.
Normalizations used in model reduction can be chosen to emphasize anything from computation reduction to parameter reduction. Choosing a normalization that emphasizes a model with a small number of parameters is useful when deploying a model onto machines with a limited communication rate, while choosing a normalization that emphasizes a model with a small computational cost is useful when deploying a model onto a machine for real-time sensor analysis. As such, we explore the effect of various normalizations used to prune kernel parameters on models trained on the ImageNet database.
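A minimal sketch of how different normalizations change kernel-pruning decisions; the 'flops' weighting here is our illustrative assumption, not the paper's exact normalization set:

```python
import numpy as np

def kernel_scores(weights, mode="l1"):
    """Score conv kernels for pruning under different normalizations.

    weights : (out_ch, in_ch, kH, kW) kernel tensor of one layer.
    mode    : 'l1' favours keeping few parameters; 'flops' additionally
              scales by per-kernel size, so pruning removes the most
              arithmetic per parameter kept.
    """
    flat = weights.reshape(weights.shape[0], -1)
    scores = np.abs(flat).sum(axis=1)        # L1 magnitude per kernel
    if mode == "flops":
        # Hypothetical computation-oriented normalization: divide by
        # the work each kernel performs.
        scores = scores / flat.shape[1]
    return scores

# Prune the lowest-scoring 30% of kernels in a layer:
# keep = kernel_scores(W, mode="flops").argsort()[int(0.3 * len(W)):]
```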
Here we compare the latency performance of the active and passive offloading decision pathways used with our adaptive computing engine. The passive pathway queries nodes for offloading after a job has been submitted, while the active pathway queries system information before jobs are submitted to create an offloading plan. The offloading plan reduces latency and communication overhead by cutting out the offloading query required for each job under the passive pathway. Overall, we see a 6 ms reduction in latency when using the active pathway in a multi-node environment connected over Wi-Fi.
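A minimal sketch contrasting the two pathways; the node interface (query_load, submit) and refresh interval are our assumptions:

```python
import time

def passive_offload(job, nodes):
    # Query every node after the job arrives (per-job round trips).
    status = {n: n.query_load() for n in nodes}
    return min(status, key=status.get).submit(job)

class ActiveOffloader:
    """Build the offloading plan ahead of time; per-job dispatch is
    a table lookup, so no query latency is paid per job."""
    def __init__(self, nodes, refresh_s=1.0):
        self.nodes, self.refresh_s = nodes, refresh_s
        self.plan, self.stamp = None, 0.0

    def submit(self, job):
        if self.plan is None or time.time() - self.stamp > self.refresh_s:
            status = {n: n.query_load() for n in self.nodes}
            self.plan = min(status, key=status.get)   # precomputed choice
            self.stamp = time.time()
        return self.plan.submit(job)                  # no per-job query
```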
Classical optimization algorithms in machine learning often take a long time to compute when applied to multi-dimensional problems and require a huge amount of CPU and GPU resources. Quantum parallelism has the potential to speed up machine learning algorithms. We describe a generic mathematical model that leverages quantum parallelism to speed up machine learning algorithms. We also apply quantum machine learning and quantum parallelism to a 3-dimensional image that varies with time, as well as to tracking speed in object identification.
One needs a good communication cost model to understand how best to optimize off-loaded computation in a tactical environment. In practical scenarios, complications also arise from in-band and out-of-band channel congestion and interference. This can happen due to intentional and/or unintentional adversary equipment and will affect both the nature of the algorithms and the sequence of steps in their execution. As a first step toward solving these problems, we present a model for a multi-node tactical network with resource constraints, with and without the presence of adversary nodes.
We present and evaluate the performance of our Network Link Outlier Factor (NLOF) for detecting transmission channel faults in communication networks. An NLOF is computed for each transmission channel in a network under management using the throughput values derived from flow data. Throughput values of flows are clustered in two stages, outlier values are determined within each of the clusters, and then flow outlier ratios determine the outlier score for each transmission channel (link). Specifically, we first cluster the throughput of flows into the set of clusters we believe will naturally exist in a network and then identify the outliers within those throughput clusters. Our technique to detect network transmission channel faults consists of: 1) flow throughput clustering, 2) flow throughput outlier detection using an outlier score, 3) tracing flows on the network topology using routing information, and 4) network link outlier score computation from flow outlier scores.
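A minimal sketch of the four NLOF steps; the paper's two-stage clustering and exact outlier rule may differ, so this sketch substitutes single-stage k-means and a z-score rule for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def nlof_scores(flow_tputs, flow_links, n_clusters=3, z_thresh=3.0):
    """Network Link Outlier Factor sketch.

    flow_tputs : (n_flows,) throughput of each flow.
    flow_links : list of link-id lists; flow_links[i] are the links
                 flow i traverses (from routing information).
    """
    # 1) Cluster flow throughputs into natural groups.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        flow_tputs.reshape(-1, 1))
    # 2) Flag outlier flows within each cluster.
    outlier = np.zeros_like(flow_tputs, dtype=bool)
    for k in range(n_clusters):
        idx = labels == k
        z = (flow_tputs[idx] - flow_tputs[idx].mean()) \
            / (flow_tputs[idx].std() + 1e-9)
        outlier[idx] = np.abs(z) > z_thresh
    # 3-4) Trace flows onto links; score each link by its
    #      outlier-flow ratio.
    counts = {}
    for i, links in enumerate(flow_links):
        for l in links:
            tot, bad = counts.get(l, (0, 0))
            counts[l] = (tot + 1, bad + int(outlier[i]))
    return {l: bad / tot for l, (tot, bad) in counts.items()}
```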
In this paper, we evaluate the performance of several flow features for classifying the network application that produced the flow. Correlating network traffic to network applications can assist with the critical network management tasks of performance assessment and network utilization accounting. Specifically, in this work we evaluate three engineered flow features and three inherent flow features (number of bytes, number of packets, and duration). For the engineered features, we evaluate three host communication behavior features proposed by the authors of BLINC. Our experiments uncover the classification power of all combinations of the three engineered features in conjunction with the three inherent features. We utilize supervised machine learning algorithms such as k-nearest neighbors and decision trees, and we use confidence intervals to uncover statistically significant classification differences among the combinations of flow features.
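A minimal sketch of the evaluation loop over feature combinations; the column layout of X is our assumption:

```python
from itertools import combinations

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Assumed X columns: [bytes, packets, duration, blinc_1, blinc_2, blinc_3]
# y: application label per flow.
INHERENT, ENGINEERED = [0, 1, 2], [3, 4, 5]

def evaluate(X, y):
    """Score every engineered-feature combination alongside the three
    inherent features, for both classifiers."""
    results = {}
    for r in range(len(ENGINEERED) + 1):
        for combo in combinations(ENGINEERED, r):
            cols = INHERENT + list(combo)
            for name, clf in [("knn", KNeighborsClassifier()),
                              ("tree", DecisionTreeClassifier())]:
                scores = cross_val_score(clf, X[:, cols], y, cv=10)
                # Mean and spread across folds feed the confidence
                # intervals used to compare combinations.
                results[(name, combo)] = (scores.mean(), scores.std())
    return results
```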
During multi-robot simultaneous localization and mapping (SLAM) tasks, a team of robot agents must efficiently synthesize a global map from individual local maps. One method to accomplish this is reserving a subset of robots to perform data fusion and assigning the others to collect and generate local maps. While this solves the division-of-labor problem, it requires humans to explicitly designate which robots handle which tasks, and it does not scale well to large teams. Moreover, when robots are operating in tactical environments, they may be placed on a heterogeneous team with limited or unreliable communication infrastructure. To assist with the task of role assignment, we describe a novel decentralized message-passing algorithm that we call Distributed Leader Consensus (DLC). DLC helps a set of agents self-organize into structured groups by giving them the ability to autonomously come to a consensus on the group leader. Our approach is entirely distributed, easily configurable, and robust to agents being dynamically added to or removed from the system. DLC may be configured to limit group sizes, assign multiple leaders, and select leaders based on computational power and/or physical proximity. We test our approach in simulation by having a set of agents adapt to changing teammate availability.
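An illustrative sketch of one round of a DLC-style election, not the algorithm from the paper; the agent interface (leader, compute, distance_to, group_size) is hypothetical:

```python
def dlc_round(agent, neighbors, max_group=5):
    """One message-passing round of a hypothetical leader election.

    Each agent advertises its current leader candidate; preference
    goes to higher compute, with ties broken toward closer proximity,
    mirroring the configurable criteria described above.
    """
    candidates = [agent.leader] + [n.leader for n in neighbors]
    best = max(candidates,
               key=lambda c: (c.compute, -agent.distance_to(c)))
    if best.group_size() < max_group:
        agent.leader = best          # join / confirm the group
    # Agents re-run this round whenever teammates join or drop out,
    # so the grouping adapts to dynamic team membership.
```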
KEYWORDS: Visualization, Visual analytics, 3D modeling, Virtual reality, Data processing, 3D displays, Data analysis, Scientific visualization, Displays, Human-machine interfaces
Advancements in the areas of high performance computing and computational sciences have facilitated the generation of an enormous amount of research data by computational scientists - the volume, velocity and variability of Big 'Research' Data have increased across all disciplines. An immersive and non-immersive analytics platform capable of handling extreme-scale scientific data will enable scientists to visualize unwieldy simulation data in an intuitive manner and guide the development of sophisticated and targeted analytics to obtain usable information. Our immersive and non-immersive visualization work is an attempt to provide computational scientists with the ability to analyze the extreme-scale data generated. The main purpose of this paper is to identify different characteristics of a scientific data analysis process to provide a general outline for scientists to select the appropriate visualization systems for their data analytics. In addition, we include details on how the immersive and non-immersive visualization hardware and software are set up. We are confident that the findings in our paper will provide scientists with a streamlined and optimal visual analytics workflow.
We present and evaluate the idea of auto-generating training data for network application classification using a rule-based expert system on two dimensions of the feature space. That training data is then used to learn classification of network applications using other dimensions of the feature space. The rule-based expert system uses transport layer port number conventions (source port, destination port) from the Internet Assigned Numbers Authority (IANA) to classify applications and create the labeled training data. A classifier can then be trained on other network flow features using this auto-generated training data. We evaluate this approach to network application classification and report our findings. We explore the use of the following classifiers: K-nearest neighbors, decision trees, and random forests. Lastly, our approach uses data solely at the flow level (in NetFlow v5 records), thereby limiting the volume of data that must be collected and/or stored.
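A minimal sketch of the rule-based auto-labeling step; the port-to-application table here is a tiny hypothetical excerpt, not the full IANA rule set:

```python
# Hypothetical excerpt of IANA port conventions used to auto-label flows.
IANA_PORTS = {53: "dns", 80: "http", 443: "https", 22: "ssh"}

def auto_label(flows):
    """Label flows whose source or destination port has a convention.

    flows : list of dicts with 'src_port' and 'dst_port' keys.
    Returns (flow, label) pairs that become the training data.
    """
    labeled = []
    for f in flows:
        app = IANA_PORTS.get(f["dst_port"]) or IANA_PORTS.get(f["src_port"])
        if app is not None:
            labeled.append((f, app))
    return labeled

# A classifier (kNN, decision tree, random forest) is then trained on
# the remaining NetFlow features (e.g. bytes, packets, duration) so it
# can classify flows whose ports carry no convention.
```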
Edge computing is emerging as a new paradigm to allow processing data near the edge of the network, where the data is typically generated and collected. This enables critical computations at the tactical edge in applications such as Internet of Battlefield Things (IoBT), in which an increasing number of devices (sensors, cameras, health monitoring devices, etc.) collect data that needs to be processed through computationally intensive algorithms with stringent reliability, security and latency constraints. Our key tool is the theory of coded computation, which advocates mixing data in computationally intensive tasks by employing erasure codes and offloading these tasks to other devices for computation. Coded computation is recently gaining interest, thanks to its higher reliability, smaller delay, and lower communication costs. In this paper, we develop a private and rateless adaptive coded computation (PRAC) algorithm by taking into account (i) the privacy requirements of IoBT applications and devices, and (ii) the heterogeneous and time-varying resources of edge devices. We show that PRAC outperforms known secure coded computing methods when resources are heterogeneous. We provide theoretical guarantees on the performance of PRAC and its comparison to baselines. Moreover, we confirm our theoretical results through simulations.
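PRAC itself is rateless and privacy-preserving; the fixed-rate MDS-style sketch below illustrates only the core straggler-masking idea behind coded computation, under our own simplifying assumptions (row count of A divisible by k, first k workers respond):

```python
import numpy as np

def coded_matvec(A, x, n_workers=4, k=2):
    """Erasure-coded offloading sketch: any k of n partial results
    suffice to recover A @ x, masking slow or failed edge devices."""
    blocks = np.array_split(A, k)            # k equal row blocks of A
    # (n_workers, k) Vandermonde generator: any k rows are invertible.
    G = np.vander(np.arange(1, n_workers + 1), k, increasing=True)
    coded = [sum(G[i, j] * blocks[j] for j in range(k))
             for i in range(n_workers)]      # n coded tasks to offload
    # Pretend only the first k workers respond with coded[i] @ x:
    done = [(i, coded[i] @ x) for i in range(k)]
    ids, parts = zip(*done)
    # Invert the sub-generator to decode the k data-block products.
    Ginv = np.linalg.inv(G[list(ids)].astype(float))
    decoded = [sum(Ginv[j, t] * parts[t] for t in range(k))
               for j in range(k)]
    return np.concatenate(decoded)
```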
OpenTap is a network traffic analysis tool tailored to remotely capture statistics from an OpenFlow switch. We have modified OpenTap for low-latency packet capture of metadata that can be used for remote security analysis. We demonstrate the utility of this tool by capturing metadata from a quantum information testbed. Specifically, the testbed transmits photon data to a desktop using packets that OpenTap captures. We used a servo motor to periodically block the beam, creating 20 ms dips in the photon intensity, and then measured this intensity using two methods: via the metadata captured by OpenTap and via the photon timestamps gathered by the quantum information testbed. Every photon intensity dip detected by the quantum information testbed was also detected using just the metadata; this indicates OpenTap can be used for remote real-time security analysis using packet metadata.
In a resource-constrained, contested environment, computing resources need to be aware of possible size, weight, and power (SWaP) restrictions. SWaP-aware computational efficiency depends upon optimization of computational resources and intelligent time versus efficiency tradeoffs in decision making. In this paper we address the complexity of various optimization strategies related to low SWaP computing. Due to these restrictions, only a small subset of less complicated and fast computable algorithms can be used for tactical, adaptive computing.
Quantum applications transmit and receive data through quantum and classical communication channels. Channel capacity, the distance and photon path between the transmitting and receiving parties, and the speed of the computation links play an essential role in timely synchronization and delivery of information over classical and quantum channels. In this study, we analyze and optimize the parameters of the communication channels needed for a quantum application to operate successfully. We also develop algorithms for synchronizing data delivery on classical and quantum channels.
Well-defined and stable quantum networks are essential to realize functional quantum communication applications. In particular, the quantum states must be precisely controlled to produce meaningful results. To counteract the unstable phase shifts in photonic systems, we apply local Bell state measurements to calibrate a non-local quantum channel. The calibration procedure is tested by applying a time encoded quantum key distribution procedure using entangled photons.
We present OpenTap, a unified interface designed as an Infrastructure-layer technology for a software-defined network measurement (SDNM) stack. OpenTap provides invocations for remotely capturing network data at various granularities, such as packet or NetFlow. OpenTap drivers can be developed that leverage open source network measurement tools such as tcpdump and nfdump. OpenTap software can be used to turn any computing device with network interfaces into a remotely controlled network data collection device. Although OpenTap was designed for SDNM, its interface generalizes to any data acquisition, thereby providing software-defined data acquisition (SDDA). We illustrate this generality with OpenTap drivers that leverage Phidgets USB sensors to remotely capture environmental data such as temperature. We have completed an implementation of OpenTap that uses a REST API for the invocations. Using that implementation, we study a few use cases of OpenTap for automated network management and network traffic visualization to characterize its utility for those applications. We find that OpenTap empowers rapid development of software for more complex network measurement functionality at the Control layer, such as joining network data with other sources and creating network data aggregates such as traffic matrices. OpenTap significantly lowers the cost and development barrier to large-scale data acquisition, thereby bringing data acquisition and analytics to an unprecedented number of users. Finally, at the Application layer, network measurement applications such as traffic matrix visualizations are easily implemented leveraging OpenTap at the Infrastructure layer in addition to the Control layer. All of these data processing software systems will be open source and available on GitHub by the time of the conference.
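A sketch of what a REST invocation against an OpenTap node might look like; the host, endpoints, and parameter names below are hypothetical, since the abstract does not specify the API's actual schema:

```python
import requests

OPENTAP = "http://tap-host:8080"   # hypothetical OpenTap node address

def start_capture(interface="eth0", granularity="netflow", seconds=60):
    """Remotely start a capture on an OpenTap node (hypothetical API)."""
    resp = requests.post(f"{OPENTAP}/captures",
                         json={"interface": interface,
                               "granularity": granularity,  # 'packet' or 'netflow'
                               "duration": seconds})
    resp.raise_for_status()
    return resp.json()["capture_id"]

def fetch_results(capture_id):
    """Pull the collected data back for Control-layer processing."""
    return requests.get(f"{OPENTAP}/captures/{capture_id}/data").json()
```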
Optimized Quantum Key Distribution (QKD) protocols promise to revolutionize cyber security by leveraging quantum phenomena to develop unbreakable security. Configurable quantum networks are necessary to make quantum applications accessible to multiple users. Quantum key distribution is particularly interesting because of the many ways in which the key exchange can be carried out. Keys can be exchanged by encoding the key into a weak photon source using classical methods, or by using pairs of photons entangled at the source, or even by encoding with classical hardware at the source with an entangling measurement that occurs at the photons' destination. Each type of quantum key exchange has its own requirements that must be met for point-to-point implementations, which makes it exceedingly difficult to implement multi-node quantum networks. We propose a programmable network model for time-encoded quantum key distribution; this version of QKD sends entangled photons to two users, and the hardware is set up such that the relative time shift in the coincident photons encodes which measurement basis was used. The protocols were first simulated by modifying previous software that used the CHP quantum simulator, and then a point-to-point key exchange was set up in hardware to demonstrate the time-encoding aspects of the protocol.
Major advancements in computational and sensor hardware have enormously facilitated the generation and collection of research data by scientists - the volume, velocity and variety of Big 'Research' Data have increased across all disciplines. A visual analytics platform capable of handling extreme-scale data will enable scientists to visualize unwieldy data in an intuitive manner and guide the development of sophisticated and targeted analytics to obtain usable information. The Reconfigurable Visual Computing Architecture is an attempt to provide scientists with the ability to analyze the extreme-scale data collected. It requires the research and development of new interdisciplinary technological tools that integrate data, real-time predictive analytics, visualization, and acceleration on heterogeneous computing platforms, and it will provide scientists with a streamlined visual analytics tool.
Well-defined and stable quantum networks are essential to realize functional quantum communication applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. In this paper, we describe new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.
Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
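Purely as an illustration of the kind of programmable abstraction involved, the following shows a flow-table-style entry extended with quantum metadata in the spirit of the table type patterns described above; all field names are our assumptions, not the paper's schema:

```python
# Hypothetical flow-table entry: a classical OpenFlow-style match/action
# pair augmented with the quantum metadata a controller would need to
# coordinate the quantum channel for superdense coding.
flow_entry = {
    "match":   {"in_port": 1, "eth_type": 0x88B5},   # classical header match
    "actions": [{"type": "OUTPUT", "port": 2}],
    "quantum_metadata": {
        "protocol":    "superdense_coding",
        "epr_pair_id": 42,       # which entangled pair this entry uses
        "basis":       "bell",   # measurement basis to coordinate
        "window_ns":   128,      # coincidence window on the quantum channel
    },
}
```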