A Survey on Efficient Methods for Adversarial Robustness

Deep learning has revolutionized computer vision with phenomenal success and widespread applications. Despite impressive results in complex problems, neural networks are susceptible to adversarial attacks: small and imperceptible changes in input space that lead these models to incorrect outputs. Adversarial attacks have raised serious concerns, and robustness to these attacks has become a vital issue. Adversarial training, a min-max optimization approach, has shown promise against these attacks. The computational cost of adversarial training, however, makes it prohibitively difficult to scale as well as to be useful in practice. Recently, several works have explored different approaches to make adversarial training computationally more affordable. This paper presents a comprehensive survey on efficient adversarial robustness methods with an aim to present a holistic outlook to make future exploration more systematic and exhaustive. We start by mathematically defining fundamental ideas in adversarially robust learning. We then divide these approaches into two categories based on underlying mechanisms: methods that modify initial adversarial training and techniques that leverage transfer learning to improve efficiency. Finally, based on this overview, we analyze and present an outlook of future directions.

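The abstract characterizes adversarial training as a min-max optimization approach. For orientation, a common way to write that objective (our notation, not taken from the paper: model f_theta, loss L, data distribution D, perturbation budget epsilon under an l_p norm) is:

\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \right]

The inner maximization searches for a worst-case perturbation delta within the budget epsilon (the "small and imperceptible changes" mentioned above), while the outer minimization updates the parameters theta against it; the efficiency methods surveyed here largely target the cost of approximating that inner step.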

Bibliographic Details
Main Authors: Awais Muhammad, Sung-Ho Bae
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Subjects: Neural networks; deep learning; efficient training; adversarial robustness; adversarial attacks; adversarial learning
Online Access: https://ieeexplore.ieee.org/document/9926085/
_version_ 1797988695562780672
author Awais Muhammad
Sung-Ho Bae
author_facet Awais Muhammad
Sung-Ho Bae
author_sort Awais Muhammad
collection DOAJ
description Deep learning has revolutionized computer vision with phenomenal success and widespread applications. Despite impressive results in complex problems, neural networks are susceptible to adversarial attacks: small and imperceptible changes in input space that lead these models to incorrect outputs. Adversarial attacks have raised serious concerns, and robustness to these attacks has become a vital issue. Adversarial training, a min-max optimization approach, has shown promise against these attacks. The computational cost of adversarial training, however, makes it prohibitively difficult to scale as well as to be useful in practice. Recently, several works have explored different approaches to make adversarial training computationally more affordable. This paper presents a comprehensive survey on efficient adversarial robustness methods with an aim to present a holistic outlook to make future exploration more systematic and exhaustive. We start by mathematically defining fundamental ideas in adversarially robust learning. We then divide these approaches into two categories based on underlying mechanisms: methods that modify initial adversarial training and techniques that leverage transfer learning to improve efficiency. Finally, based on this overview, we analyze and present an outlook of future directions.
first_indexed 2024-04-11T08:08:18Z
format Article
id doaj.art-98230c1162e74412b6321fa6466cd721
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-04-11T08:08:18Z
publishDate 2022-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-98230c1162e74412b6321fa6466cd721 | 2022-12-22T04:35:27Z | eng | IEEE | IEEE Access | 2169-3536 | 2022-01-01 | vol. 10, pp. 118815-118830 | doi: 10.1109/ACCESS.2022.3216291 | article 9926085 | A Survey on Efficient Methods for Adversarial Robustness | Awais Muhammad; Sung-Ho Bae (https://orcid.org/0000-0003-2677-3186) | Department of Computer Science and Engineering, Kyung-Hee University, Seoul, South Korea (both authors) | Deep learning has revolutionized computer vision with phenomenal success and widespread applications. Despite impressive results in complex problems, neural networks are susceptible to adversarial attacks: small and imperceptible changes in input space that lead these models to incorrect outputs. Adversarial attacks have raised serious concerns, and robustness to these attacks has become a vital issue. Adversarial training, a min-max optimization approach, has shown promise against these attacks. The computational cost of adversarial training, however, makes it prohibitively difficult to scale as well as to be useful in practice. Recently, several works have explored different approaches to make adversarial training computationally more affordable. This paper presents a comprehensive survey on efficient adversarial robustness methods with an aim to present a holistic outlook to make future exploration more systematic and exhaustive. We start by mathematically defining fundamental ideas in adversarially robust learning. We then divide these approaches into two categories based on underlying mechanisms: methods that modify initial adversarial training and techniques that leverage transfer learning to improve efficiency. Finally, based on this overview, we analyze and present an outlook of future directions. | https://ieeexplore.ieee.org/document/9926085/ | Neural networks; deep learning; efficient training; adversarial robustness; adversarial attacks; adversarial learning
spellingShingle Awais Muhammad
Sung-Ho Bae
A Survey on Efficient Methods for Adversarial Robustness
IEEE Access
Neural networks
deep learning
efficient training
adversarial robustness
adversarial attacks
adversarial learning
title A Survey on Efficient Methods for Adversarial Robustness
title_full A Survey on Efficient Methods for Adversarial Robustness
title_fullStr A Survey on Efficient Methods for Adversarial Robustness
title_full_unstemmed A Survey on Efficient Methods for Adversarial Robustness
title_short A Survey on Efficient Methods for Adversarial Robustness
title_sort survey on efficient methods for adversarial robustness
topic Neural networks
deep learning
efficient training
adversarial robustness
adversarial attacks
adversarial learning
url https://ieeexplore.ieee.org/document/9926085/
work_keys_str_mv AT awaismuhammad asurveyonefficientmethodsforadversarialrobustness
AT sunghobae asurveyonefficientmethodsforadversarialrobustness
AT awaismuhammad surveyonefficientmethodsforadversarialrobustness
AT sunghobae surveyonefficientmethodsforadversarialrobustness