U.S. Department of Energy
Office of Scientific and Technical Information

The Effects of Compounded Model Size Reductions on Adversarial Robustness

Conference
Recent advances in Edge AI and Tiny Machine Learning (TinyML) have enabled the deployment of machine learning models in resource-constrained environments. However, deploying these models on edge devices, such as microcontrollers, requires significant reduction of the model footprint through techniques such as quantization, pruning, and clustering. While these optimization methods offer considerable advantages, they can introduce AI-related security vulnerabilities, particularly concerning model robustness against adversarial attacks. Prior research has extensively examined the impact of quantization on adversarial robustness; the effects of alternative reduction techniques and their combinations, however, remain understudied. This paper investigates the impact of model size reduction techniques on adversarial robustness, applied both individually and in combination. We used the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks to generate adversarial perturbations for both training and testing data, then evaluated the models' accuracy under adversarial training conditions. Our findings revealed that reduction techniques generally diminished robustness, although combining techniques did not degrade robustness beyond the effect of each technique applied individually. Moreover, specific techniques can enhance resistance to small perturbations. This research provides insight into the trade-offs between model size reduction and security, establishing a foundation for future work on adversarial training techniques and methodologies that maintain robustness while preserving memory footprint benefits.
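The abstract references FGSM, which perturbs an input in the direction of the sign of the loss gradient. A minimal sketch of that attack in PyTorch is shown below; the linear demo model, input shapes, and epsilon value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """FGSM: x_adv = clamp(x + epsilon * sign(grad_x loss), 0, 1).

    Assumes inputs are normalized to [0, 1]; epsilon bounds the
    per-element (L-infinity) perturbation size.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny stand-in classifier (hypothetical; the paper's models are not given here).
torch.manual_seed(0)
model = nn.Linear(4, 3)
x = torch.rand(2, 4)          # two samples, four features, in [0, 1]
y = torch.tensor([0, 2])      # ground-truth labels
x_adv = fgsm_attack(model, x, y, epsilon=0.03)
```

PGD can be viewed as this same step applied iteratively with a projection back into the epsilon-ball around the original input after each step.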
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-00OR22725
OSTI ID:
3002675
Resource Type:
Conference paper/presentation
Conference Information:
The 18th IEEE Dallas Circuits and Systems Conference (DCAS) - Dallas, Texas, United States of America - April 11-13, 2025
Country of Publication:
United States
Language:
English

Similar Records

Defending Against Adversarial Examples
Technical Report · September 1, 2019 · OSTI ID: 1569514

Cognitive IoT and Edge Computing for Intrusion Detection with Federated TinyML
Conference · April 30, 2025 · OSTI ID: 3002487

On the vulnerability of data-driven structural health monitoring models to adversarial attack
Journal Article · May 25, 2020 · Structural Health Monitoring · OSTI ID: 1630947

Related Subjects