To function at the same operational tempo as human teammates on the battlefield in a robust and resilient manner, autonomous systems must assess and manage risk as it pertains to vehicle navigation. Risk comes in multiple forms, associated with specific and uncertain terrains, environmental conditions, and nearby actors. In this work, we present a risk-aware path planning method to handle the first form, incorporating perception uncertainty over terrain types to trade off between exploration and exploitation behaviors. The uncertainty from machine-learned terrain segmentation models is used to generate a layered terrain map that associates every grid cell with its label uncertainty among the semantic classes. The risk term increases when semantic classes with very different traversability (e.g., tree and grass) are associated with the same cell. We show that adjusting risk tolerances allows the planner to recognize and generate paths through materials like tall grass that have historically been ruled out when only geometry is considered. A risk-aware planner also allows the system to trigger an exploratory behavior, gathering more information to reduce uncertainty over terrain categorizations. Most existing methods that incorporate risk simply avoid uncertain regions, whereas here the vehicle can determine whether the risk is too high only after new observation and investigation. This also allows the autonomous system to ask a human teammate for help to reduce uncertainty and make progress toward the goal. We demonstrate the approach on a ground robot, both in simulation and in the real world, autonomously navigating through a wooded environment.
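To make the risk term concrete, the sketch below computes a per-cell planning cost from a semantic label distribution. This is a minimal illustration, not the paper's implementation: the class list, traversal costs, spread-based risk term, and the risk_tolerance weighting are all assumptions introduced for the example.

```python
import numpy as np

# Hypothetical per-class traversal costs (0 = free, 1 = lethal);
# the classes and values are assumptions, not taken from the paper.
TRAVERSAL_COST = np.array([0.05,   # dirt
                           0.15,   # grass
                           0.40,   # tall grass
                           1.00])  # tree

def cell_risk(class_probs, costs=TRAVERSAL_COST):
    """Risk statistics for one map cell from its label distribution.

    The expected cost captures average traversability; the spread of
    cost under the label distribution grows when classes with very
    different traversability (e.g., tree vs. grass) share probability
    mass in the same cell.
    """
    class_probs = np.asarray(class_probs, dtype=float)
    expected_cost = float(class_probs @ costs)
    cost_spread = float(np.sqrt(class_probs @ (costs - expected_cost) ** 2))
    return expected_cost, cost_spread

def planner_cell_cost(class_probs, risk_tolerance=1.0):
    """Scalar cost for a planner: mean cost plus an uncertainty penalty.

    A low risk_tolerance weights the uncertainty term heavily
    (conservative paths); a high tolerance lets the planner cut
    through uncertain cells such as tall grass.
    """
    expected_cost, cost_spread = cell_risk(class_probs)
    return expected_cost + cost_spread / risk_tolerance

# Example: a cell the segmentation model cannot decide between
# tall grass and tree costs more than a confident tall-grass cell.
print(planner_cell_cost([0.0, 0.0, 0.9, 0.1]))  # mostly tall grass
print(planner_cell_cost([0.0, 0.0, 0.5, 0.5]))  # ambiguous -> higher
```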
In machine learning, backdoor or trojan attacks during model training can cause the targeted model to deceptively learn to misclassify in the presence of specific triggers. This mechanism of deception enables the attacker to exercise full control over when the model's behavior becomes malicious through use of a trigger. In this paper, we introduce Epistemic Classifiers as a new category of defense mechanism and show their effectiveness in detecting backdoor attacks; this detection can be used to trigger default mechanisms, or to solicit human intervention, when an untrustworthy model prediction could adversely impact the system within which it operates. We show experimental results on multiple public datasets and use visualizations to explain why the proposed approach is effective. This empowers the war fighter to trust AI at the tactical edge when it is reliable, and to become sensitive to scenarios with deception and noise where reliability cannot be provided.
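The abstract does not detail how an Epistemic Classifier is constructed, but the sketch below illustrates the general pattern it describes: wrapping a model so that predictions lacking support can trigger a default action or a request for human intervention. The nearest-neighbor agreement check, the support threshold, and the class and method names are stand-in assumptions, not the paper's method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class AbstainingWrapper:
    """Flags predictions that lack support from the training data.

    Simplified stand-in for the idea in the abstract: compare the
    model's prediction against the labels of nearby training points
    in feature space, and abstain (so a default mechanism or a human
    can take over) when they disagree.
    """

    def __init__(self, model, k=5, support_threshold=0.8):
        self.model = model                  # any object with a predict() method
        self.k = k
        self.support_threshold = support_threshold
        self.nn = NearestNeighbors(n_neighbors=k)

    def fit(self, X_train, y_train):
        # Index the training features so local label support can be queried.
        self.nn.fit(X_train)
        self.y_train = np.asarray(y_train)
        return self

    def predict_or_abstain(self, X):
        preds = self.model.predict(X)
        _, idx = self.nn.kneighbors(X)
        decisions = []
        for pred, neighbors in zip(preds, idx):
            # Fraction of nearby training points whose label agrees
            # with the model's prediction.
            support = np.mean(self.y_train[neighbors] == pred)
            if support >= self.support_threshold:
                decisions.append(pred)      # prediction is supported by local evidence
            else:
                decisions.append("ABSTAIN") # weak support: defer rather than trust
        return decisions
```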