Safeguards-Informed Hybrid Imagery Dataset [Poster]
Deep learning computer vision models require many thousands of properly labeled images for training, which is especially challenging for safeguards and nonproliferation, because safeguards-relevant images are typically rare due to the sensitivity and limited availability of the technologies. Creating relevant images through real-world staging is costly and limited in scope. Expert labeling is expensive, time-consuming, and error prone. We aim to develop a dataset of real-world and synthetic images relevant to the nuclear safeguards domain that can support multiple data science research questions. In the process of developing this dataset, we aim to establish a novel workflow for validating synthetic images using machine learning explainability methods, testing across multiple computer vision algorithms, and iteratively rendering synthetic data. We will deliver one million images, both real-world and synthetically rendered, of two types of uranium storage and transportation containers, with labeled ground truth and associated adversarial examples.
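As a rough illustration of the kind of explainability-based validation the abstract describes, the sketch below compares simple gradient saliency maps for a real and a synthetic image of the same container class. This is not the authors' workflow; the backbone model, file names, and class index are all placeholders, and vanilla gradient saliency stands in for whatever explainability method the project ultimately uses.

```python
# Minimal sketch (assumed, not the poster's actual pipeline): compare where a
# classifier "looks" on a real image versus a synthetically rendered one using
# vanilla gradient saliency. High agreement suggests the synthetic image drives
# the model's attention similarly to the real image.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def saliency_map(model, image, target_class):
    """Vanilla gradient saliency: |d score / d pixel|, max over color channels."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values  # (H, W)

def saliency_agreement(model, real_path, synth_path, target_class):
    """Cosine similarity between flattened saliency maps of a real and a
    synthetic image of the same class."""
    real = preprocess(Image.open(real_path).convert("RGB"))
    synth = preprocess(Image.open(synth_path).convert("RGB"))
    s_real = saliency_map(model, real, target_class).flatten()
    s_synth = saliency_map(model, synth, target_class).flatten()
    return F.cosine_similarity(s_real.unsqueeze(0), s_synth.unsqueeze(0)).item()

if __name__ == "__main__":
    # Placeholder backbone; the project would use a model trained on its own classes.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    sim = saliency_agreement(model,
                             "real_container.jpg",       # hypothetical file names
                             "synthetic_container.png",
                             target_class=0)              # hypothetical class index
    print(f"saliency agreement: {sim:.3f}")
```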
- Research Organization:
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA), Office of Defense Nuclear Nonproliferation
- DOE Contract Number:
- NA0003525
- OSTI ID:
- 1884463
- Report Number(s):
- SAND2021-4693C; 698994
- Resource Relation:
- Conference: Proposed for Nuclear Security Applications Research & Development Program Review Meeting (NSARD 21), Held Virtually (United States), 19-22 April 2021
- Country of Publication:
- United States
- Language:
- English