U.S. Department of Energy
Office of Scientific and Technical Information

Exploring Robust Features for Improving Adversarial Robustness

Journal Article · IEEE Transactions on Cybernetics
While deep neural networks (DNNs) have revolutionized many fields, their fragility to carefully designed adversarial attacks impedes their use in safety-critical applications. In this article, we strive to explore robust features that are not affected by adversarial perturbations, that is, features invariant between a clean image and its adversarial examples (AEs), to improve the model's adversarial robustness. Specifically, we propose a feature disentanglement model to segregate the robust features from nonrobust features and domain-specific features. Extensive experiments on five widely used datasets with different attacks demonstrate that the robust features obtained from our model improve the model's adversarial robustness compared to state-of-the-art approaches. Moreover, the trained domain discriminator identifies the domain-specific features of clean images and AEs almost perfectly. This enables AE detection without incurring additional computational cost. With that, we can also specify different classifiers for clean images and AEs, thereby avoiding any drop in clean-image accuracy.
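The high-level idea from the abstract can be illustrated with a minimal sketch: an encoder splits an image embedding into robust, nonrobust, and domain-specific components, and a domain discriminator reads only the domain-specific part to distinguish clean images from AEs. All module names, layer sizes, and the flat-linear backbone below are illustrative assumptions, not the paper's actual architecture or training objective (which involves disentanglement losses the abstract does not detail).

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Hypothetical encoder that segregates an embedding into three factors:
    robust, nonrobust, and domain-specific features (sizes are assumptions)."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU()
        )
        # Three heads project the shared embedding into the three factors.
        self.robust_head = nn.Linear(256, feat_dim)
        self.nonrobust_head = nn.Linear(256, feat_dim)
        self.domain_head = nn.Linear(256, feat_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.robust_head(h), self.nonrobust_head(h), self.domain_head(h)

class DomainDiscriminator(nn.Module):
    """Binary classifier, clean (0) vs adversarial (1), fed only the
    domain-specific features -- this is what enables AE detection at no
    extra inference cost once the encoder has already run."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 2)
        )

    def forward(self, z_domain):
        return self.net(z_domain)

encoder = DisentangledEncoder()
disc = DomainDiscriminator()
x = torch.randn(4, 3, 32, 32)                    # batch of 4 CIFAR-sized images
z_robust, z_nonrobust, z_domain = encoder(x)
logits = disc(z_domain)                          # clean-vs-AE logits per image
print(z_robust.shape, logits.shape)
```

At inference, the discriminator's decision could route inputs to separate classifiers for clean images and AEs, matching the abstract's claim of avoiding a clean-accuracy drop; how those classifiers are trained is not specified here.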
Research Organization:
Brookhaven National Laboratory (BNL), Upton, NY (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
Grant/Contract Number:
SC0012704
OSTI ID:
2368822
Report Number(s):
BNL--225592-2024-JAAM
Journal Information:
IEEE Transactions on Cybernetics, Vol. 54, Issue 9; ISSN 2168-2267
Publisher:
IEEE
Country of Publication:
United States
Language:
English


Similar Records

XploreNAS: Explore Adversarially Robust and Hardware-efficient Neural Architectures for Non-ideal Xbars
Journal Article · Jul 23, 2023 · ACM Transactions on Embedded Computing Systems · OSTI ID: 2422212

Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent
Conference · Apr 3, 2020 · Proceedings of the AAAI Conference on Artificial Intelligence · OSTI ID: 1958810

The Effects of Compounded Model Size Reductions on Adversarial Robustness
Conference · Apr 1, 2025 · OSTI ID: 3002675