U.S. Department of Energy
Office of Scientific and Technical Information

Adversarial Sampling-Based Motion Planning

Journal Article · IEEE Robotics and Automation Letters
 [1];  [2];  [1];  [3];  [4];  [1]
  1. Georgia Institute of Technology, Atlanta, GA (United States)
  2. Univ. of Hawaii, Hilo, HI (United States)
  3. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  4. Univ. of Washington, Seattle, WA (United States)
There are many scenarios in which a mobile agent may not want its path to be predictable. Examples include preserving privacy or confusing an adversary. However, this desire for deception can conflict with the need for a low path cost. Optimal plans, such as those produced by RRT*, may have low path cost, but their optimality makes them predictable. Similarly, a deceptive path that features numerous zig-zags may take too long to reach the goal. We address this trade-off by drawing inspiration from adversarial machine learning. We propose a new planning algorithm, Adversarial RRT*, which attempts to deceive machine learning classifiers by incorporating a predicted measure of deception into the planner cost function. Adversarial RRT* considers both path cost and predicted deceptiveness to produce a trajectory with low path cost that still has deceptive properties. We demonstrate the performance of Adversarial RRT*, with two measures of deception, using a simulated Dubins vehicle. We show how Adversarial RRT* can decrease cumulative RNN accuracy across paths to 10%, compared to 46% cumulative accuracy on near-optimal RRT* paths, while keeping path length within 16% of optimal. We also present an example demonstration in which the Adversarial RRT* planner attempts to safely deliver a high-value package while an adversary observes the path and tries to intercept the package.
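The abstract describes folding a learned measure of deception into the RRT* cost function. As a rough illustration only, and not the authors' implementation, the sketch below shows one way such a composite cost could look: plain path length plus a penalty proportional to how confidently a classifier predicts the agent's true goal. The names predict_goal_probability and lam, and the (x, y) waypoint representation, are assumptions standing in for the paper's RNN-based deception measure.

```python
# Minimal sketch (assumed interface, not the published Adversarial RRT* code):
# combine geometric path cost with a classifier-derived deception penalty.

import math

def path_length(path):
    """Sum of Euclidean segment lengths along a list of (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def adversarial_cost(path, true_goal, predict_goal_probability, lam=1.0):
    """Composite planner cost.

    predict_goal_probability(path, goal) -> float in [0, 1] is a hypothetical
    stand-in for a learned goal-recognition model (e.g., an RNN classifier):
    the higher its confidence in the true goal, the larger the penalty.
    lam weights deceptiveness against path efficiency.
    """
    deception_penalty = predict_goal_probability(path, true_goal)
    return path_length(path) + lam * deception_penalty
```

In an RRT*-style planner, a composite cost of this kind would replace pure path length when selecting parents and rewiring the tree, with lam controlling how much path efficiency is traded away for deceptive behavior.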
Research Organization:
Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
NA0003525
OSTI ID:
1845392
Report Number(s):
SAND2022-0910J; 703237
Journal Information:
IEEE Robotics and Automation Letters, Vol. 7, Issue 2; ISSN 2377-3766
Publisher:
IEEE
Country of Publication:
United States
Language:
English

Similar Records

Improved Performance of Asymptotically Optimal Rapidly Exploring Random Trees
Journal Article · 2018 · Journal of Dynamic Systems, Measurement, and Control · OSTI ID: 1467259

Using Machine Learning in Adversarial Environments
Technical Report · 2016 · OSTI ID: 1563076

Using Machine Learning in Adversarial Environments.
Technical Report · 2016 · OSTI ID: 1238101