Adversarial Robustness Limits


Abstract

This is the official code for the ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies". The code extends that of Wang et al. (2023) to facilitate state-of-the-art CIFAR-10 adversarial robustness via training of WideResNet models on various large synthetic datasets. It also supports derivation of the scaling laws put forth in our ICML paper, which we use to compute efficient training settings.
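The paper's actual scaling-law code lives in the repository; as a minimal, hypothetical sketch of the general idea, the snippet below fits a saturating power law to robust-accuracy measurements with scipy.optimize.curve_fit. The functional form, parameter names, and data points are illustrative assumptions, not values taken from the paper or this repository.

# Hypothetical sketch, not the repository's actual fitting code.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_millions, a, b, c):
    # Saturating power law: robust accuracy approaches the asymptote `a`
    # as the (synthetic) dataset size grows.
    return a - b * n_millions ** (-c)

# Placeholder data (dataset size in millions of images, robust accuracy);
# illustrative only, not results from the paper.
n = np.array([1.0, 5.0, 20.0, 50.0])
acc = np.array([0.62, 0.67, 0.70, 0.715])

(a, b, c), _ = curve_fit(scaling_law, n, acc, p0=[0.75, 0.13, 0.5])
print(f"estimated robustness limit (asymptote): {a:.3f}")

Under a fit of this kind, the asymptote a is the quantity one would interpret as a robustness "limit", and the fitted curve can be inverted to choose training settings that reach a target accuracy efficiently.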
Developers:
Bartoldson, Brian [1]
  1. Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Release Date:
2024-05-21
Project Type:
Open Source, Publicly Available Repository
Software Type:
Scientific
Version:
1.0
Licenses:
MIT License
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
Code ID:
134198
Site Accession Number:
LLNL-CODE-866411
Research Org.:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Country of Origin:
United States


Citation Formats

Bartoldson, Brian R. Adversarial Robustness Limits. Computer Software. https://github.com/bbartoldson/Adversarial-Robustness-Limits. USDOE National Nuclear Security Administration (NNSA). 21 May 2024. Web. doi:10.11578/dc.20240710.2.
Bartoldson, Brian R. (2024, May 21). Adversarial Robustness Limits. [Computer software]. https://github.com/bbartoldson/Adversarial-Robustness-Limits. https://doi.org/10.11578/dc.20240710.2.
Bartoldson, Brian R. "Adversarial Robustness Limits." Computer software. May 21, 2024. https://github.com/bbartoldson/Adversarial-Robustness-Limits. https://doi.org/10.11578/dc.20240710.2.
@misc{doecode_134198,
  title = {Adversarial Robustness Limits},
  author = {Bartoldson, Brian R.},
  abstractNote = {This is the official code for the ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies". The code extends that of Wang et al. (2023) to facilitate state-of-the-art CIFAR-10 adversarial robustness via training of WideResNet models on various large synthetic datasets. It also supports derivation of the scaling laws put forth in our ICML paper, which we use to compute efficient training settings.},
  doi = {10.11578/dc.20240710.2},
  url = {https://doi.org/10.11578/dc.20240710.2},
  howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20240710.2}},
  year = {2024},
  month = {may}
}