Exploring Robust Features for Improving Adversarial Robustness
Journal Article · IEEE Transactions on Cybernetics
- Brookhaven National Laboratory (BNL), Upton, NY (United States)
- Stony Brook Univ., NY (United States)
While deep neural networks (DNNs) have revolutionized many fields, their fragility to carefully designed adversarial attacks impedes the use of DNNs in safety-critical applications. In this article, we strive to explore robust features that are not affected by adversarial perturbations, that is, features invariant across a clean image and its adversarial examples (AEs), to improve the model's adversarial robustness. Specifically, we propose a feature disentanglement model to segregate the robust features from nonrobust features and domain-specific features. Extensive experiments on five widely used datasets with different attacks demonstrate that the robust features obtained from our model improve the model's adversarial robustness compared to state-of-the-art approaches. Moreover, the trained domain discriminator identifies the domain-specific features of clean images and AEs almost perfectly. This enables AE detection without incurring additional computational costs. With that, we can also specify different classifiers for clean images and AEs, thereby avoiding any drop in clean-image accuracy.
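To make the abstract's pipeline concrete, below is a minimal toy sketch of the idea it describes: an encoder whose output is partitioned into robust, nonrobust, and domain-specific feature groups, with a domain discriminator applied to the domain-specific slice to route inputs to a per-domain classifier. All dimensions, the linear encoder, and the sigmoid discriminator are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature-group sizes (illustrative, not from the paper):
# the encoder output is partitioned into robust, nonrobust, and
# domain-specific slices.
D_ROBUST, D_NONROBUST, D_DOMAIN = 8, 8, 4

def encode(x, W):
    """Toy linear encoder; splits its output into the three feature groups."""
    z = np.tanh(W @ x)
    return (z[:D_ROBUST],
            z[D_ROBUST:D_ROBUST + D_NONROBUST],
            z[D_ROBUST + D_NONROBUST:])

def domain_discriminator(z_domain, w):
    """Sigmoid score: probability the input is adversarial (vs. clean)."""
    return 1.0 / (1.0 + np.exp(-w @ z_domain))

# Random weights standing in for trained parameters.
x = rng.normal(size=16)                                   # toy input
W = rng.normal(size=(D_ROBUST + D_NONROBUST + D_DOMAIN, 16))
w = rng.normal(size=D_DOMAIN)

z_rob, z_non, z_dom = encode(x, W)
p_adv = domain_discriminator(z_dom, w)

# Routing step: a separate classifier handles each domain, so detection
# of AEs need not degrade clean-image accuracy.
classifier = "adversarial-input classifier" if p_adv > 0.5 else "clean-input classifier"
```

In the paper's actual setting the discriminator is trained jointly with the disentanglement objective; the point of the sketch is only the data flow (encode, slice, discriminate on the domain-specific part, route).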
- Research Organization:
- Brookhaven National Laboratory (BNL), Upton, NY (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- SC0012704
- OSTI ID:
- 2368822
- Report Number(s):
- BNL--225592-2024-JAAM
- Journal Information:
- IEEE Transactions on Cybernetics, Vol. 54, Issue 9; ISSN 2168-2267
- Publisher:
- IEEE
- Country of Publication:
- United States
- Language:
- English
Similar Records
XploreNAS: Explore Adversarially Robust and Hardware-efficient Neural Architectures for Non-ideal Xbars
Journal Article · Jul 23, 2023 · ACM Transactions on Embedded Computing Systems · OSTI ID: 2422212

Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent
Conference · Apr 3, 2020 · Proceedings of the AAAI Conference on Artificial Intelligence · OSTI ID: 1958810

The Effects of Compounded Model Size Reductions on Adversarial Robustness
Conference · Apr 1, 2025 · OSTI ID: 3002675