Understanding the Energy vs. Adversarial Robustness Trade-Off in Deep Neural Networks

Bibliographic Details
Main Authors: Kyungmi Lee, Anantha P. Chandrakasan
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Open Journal of Circuits and Systems
Subjects: Convolutional neural networks; deep neural networks; hardware acceleration of machine learning algorithms; neural networks; learning systems
Online Access: https://ieeexplore.ieee.org/document/9645046/
description Adversarial examples, which are crafted by adding small perturbations to typical inputs in order to fool the prediction of a deep neural network (DNN), pose a threat to security-critical applications, and robustness against adversarial examples is becoming an important design factor. In this work, we first examine the methodology for evaluating adversarial robustness using first-order attack methods, and analyze three cases in which this evaluation methodology overestimates robustness: 1) numerical saturation of the cross-entropy loss, 2) non-differentiable functions in DNNs, and 3) ineffective initialization of the attack methods. For each case, we propose compensation methods that can be easily combined with existing attack methods, thus providing a more precise evaluation methodology for robustness. Second, we benchmark the relationship between adversarial robustness and inference-time energy on an embedded hardware platform using our proposed evaluation methodology, and demonstrate that this relationship can be obscured by the three cases behind overestimation. Finally, we examine the gap between robustness measured with attack methods and with verification methods, and show that this gap is reduced by our proposed compensation methods. Overall, our work shows that the robustness-energy trade-off differs from the conventional accuracy-energy trade-off, and highlights the importance of a precise evaluation methodology.
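Case 1 above (numerical saturation of the cross-entropy loss) can be illustrated with a minimal NumPy sketch. This is not the authors' code: the linear classifier, random seed, and confidence-scaling factor are illustrative assumptions. The idea: when a model's logit margin is very large, the float32 softmax underflows to an exact one-hot vector, so the cross-entropy gradient that first-order attacks follow vanishes, while a non-saturating margin loss (a CW-style surrogate, shown here as one common compensation) still yields a usable attack direction.

```python
import numpy as np

def softmax(z):
    # Max-shifted softmax; still underflows in float32 for very large logit margins.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ce_input_grad(W, x, y):
    # Gradient of cross-entropy w.r.t. input x for a linear model z = W @ x,
    # in float32 as a typical framework would compute it: dL/dx = W^T (softmax(z) - onehot(y)).
    z = (W @ x).astype(np.float32)
    p = softmax(z)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def margin_input_grad(W, x, y):
    # Gradient w.r.t. x of the margin loss max_{j != y} z_j - z_y,
    # which does not saturate even when the model is very confident.
    z = W @ x
    z_masked = np.where(np.arange(len(z)) == y, -np.inf, z)
    j = int(np.argmax(z_masked))
    return W[j] - W[y]

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 20)).astype(np.float32)
x = rng.normal(size=20).astype(np.float32)
y = 3

# Push the input so the model is extremely confident in class y: the logit
# margin becomes large enough that softmax underflows to one-hot in float32.
x_conf = (x + 500.0 * W[y]).astype(np.float32)

g_ce = ce_input_grad(W, x_conf, y)
g_margin = margin_input_grad(W, x_conf, y)

print(np.abs(g_ce).max())      # ~0: cross-entropy gradient vanishes (numerical saturation)
print(np.abs(g_margin).max())  # stays O(1): margin loss keeps an attack direction
```

An attack that relies solely on the saturated cross-entropy gradient stalls on such confident inputs and reports them as robust; swapping in a non-saturating surrogate of this kind is one way the overestimation can be compensated.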
first_indexed 2024-12-17T15:29:18Z
id doaj.art-12ef1808aaca433ab0a51b698d8e013d
institution Directory Open Access Journal
issn 2644-1225
doi 10.1109/OJCAS.2021.3116244
citation IEEE Open Journal of Circuits and Systems, vol. 2, pp. 843-855 (2021), IEEE document 9645046
author_affiliations Kyungmi Lee (https://orcid.org/0000-0001-6406-9515) and Anantha P. Chandrakasan (https://orcid.org/0000-0002-5977-2748), Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
topic Convolutional neural networks
deep neural networks
hardware acceleration of machine learning algorithms
neural networks
learning systems