On the vulnerability of data-driven structural health monitoring models to adversarial attack
- Industrial Doctorate Centre in Machining Science, The University of Sheffield – Advanced Manufacturing Research Centre with Boeing (AMRC), Rotherham, UK, Dynamics Research Group, The University of Sheffield, Sheffield, UK
- Los Alamos National Laboratory, Los Alamos, NM, USA
Many approaches at the forefront of structural health monitoring rely on cutting-edge techniques from the field of machine learning. Recently, much interest has been directed towards the study of so-called adversarial examples: deliberate input perturbations that deceive machine learning models while remaining semantically identical to the original inputs. This article demonstrates that data-driven approaches to structural health monitoring are vulnerable to attacks of this kind. In the perfect-information or ‘white-box’ scenario, a transformation is found that maps every example in the Los Alamos National Laboratory three-storey structure dataset to an adversarial example. Also presented is an adversarial threat model specific to structural health monitoring. The threat model is proposed with a view to motivating discussion of ways in which structural health monitoring approaches might be made more robust to the threat of adversarial attack.
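To illustrate the white-box attack setting described in the abstract, the sketch below applies a fast-gradient-sign-method (FGSM) style perturbation to a toy binary "damage classifier". This is a minimal illustration only: the logistic model, weights, feature vector, and epsilon are invented assumptions, and it does not reproduce the paper's actual models or the LANL three-storey dataset.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Perturb x in the direction that increases the classifier's loss.

    For a logistic model p = sigmoid(w @ x + b) with binary cross-entropy
    loss, the gradient of the loss with respect to the input x is
    (p - y_true) * w; FGSM steps epsilon in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Hand-picked toy weights and a 'healthy' feature vector (assumptions).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
y_true = 0.0  # label: undamaged

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, b, y_true, epsilon=0.3)
p_adv = sigmoid(w @ x_adv + b)
# The adversarial input raises the predicted damage probability even
# though each component of the perturbation is bounded by epsilon.
```

In the white-box scenario the attacker has full access to the model's weights and gradients, which is what makes a deterministic transformation of every input into an adversarial example feasible.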
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE; USDOE National Nuclear Security Administration (NNSA)
- Grant/Contract Number:
- 89233218CNA000001
- OSTI ID:
- 1630947
- Alternate ID(s):
- OSTI ID: 1822742
- Report Number(s):
- LA-UR-19-32456
- Journal Information:
- Structural Health Monitoring, Vol. 20, Issue 4; ISSN 1475-9217
- Publisher:
- SAGE Publications
- Country of Publication:
- United Kingdom
- Language:
- English