Adversarial Robustness Limits
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
This is the official code for the ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies". The code extends that of Wang et al. (2023) to enable state-of-the-art CIFAR-10 adversarial robustness by training WideResNet models on various large synthetic datasets. It also supports derivation of the scaling laws put forth in our ICML paper, which we use to compute efficient training settings; a rough illustration of that fitting step is sketched below.
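As a rough illustration of the scaling-law derivation step, the sketch below fits a power law for robust error versus training compute. The functional form, the numbers, and the variable names are illustrative assumptions for this sketch only, not the repository's actual data, parameterization, or API.

```python
# Illustrative sketch only: fit a power law of robust error vs. training compute,
# in the spirit of the scaling-law analysis described above. All values below are
# synthetic placeholders, not results from this repository or the ICML paper.
import numpy as np

# Hypothetical measurements: training compute (FLOPs) and robust accuracy.
flops = np.array([1e17, 1e18, 1e19, 1e20, 1e21])
robust_acc = np.array([0.52, 0.58, 0.63, 0.66, 0.68])

# Assumed form: robust error ~ A * FLOPs^(-alpha).
# A linear fit in log-log space recovers the exponent alpha and prefactor A.
robust_err = 1.0 - robust_acc
slope, intercept = np.polyfit(np.log10(flops), np.log10(robust_err), deg=1)
alpha, A = -slope, 10.0 ** intercept

# Extrapolate the fitted law to 10x the largest measured compute budget.
pred_acc = 1.0 - A * (10 * flops[-1]) ** (-alpha)
print(f"alpha = {alpha:.3f}, predicted robust accuracy at 1e22 FLOPs ~ {pred_acc:.3f}")
```

A saturating form (e.g., a fixed robustness ceiling minus a power-law term) could be fit the same way with `scipy.optimize.curve_fit`; the choice here is only for brevity.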
- Short Name / Acronym:
- ARL
- Site Accession Number:
- LLNL-CODE-866411
- Software Type:
- Scientific
- License(s):
- MIT License
- Research Organization:
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA)
- Primary Award/Contract Number:
- AC52-07NA27344
- DOE Contract Number:
- AC52-07NA27344
- Code ID:
- 134198
- OSTI ID:
- code-134198
- Country of Origin:
- United States
Similar Records
- Double Visual Defense (Software · Dec 11, 2024 · OSTI ID: code-154189)
- Out of distribution evaluation for neural image compression (Software · Aug 31, 2023 · OSTI ID: code-115924)
- Topological Signatures of Adversaries in Multimodal Alignments (Software · Nov 30, 2025 · OSTI ID: code-171703)