Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security critical applications. However, it is now known that deep learning is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.
Main Authors: | Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah |
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Adversarial examples; adversarial defense; adversarial machine learning; black-box attack; deep learning; perturbation |
Online Access: | https://ieeexplore.ieee.org/document/9614158/ |
_version_ | 1818843587708715008 |
author | Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah |
author_facet | Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah |
author_sort | Naveed Akhtar |
collection | DOAJ |
description | Deep Learning is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security critical applications. However, it is now known that deep learning is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013, it has attracted significant attention of researchers from multiple sub-fields of machine intelligence. In 2018, we published the first-ever review of the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses). Many of those contributions have inspired new directions in this area, which has matured significantly since witnessing the first generation methods. Hence, as a legacy sequel of our first literature survey, this review article focuses on the advances in this area since 2018. We thoroughly discuss the first generation attacks and comprehensively cover the modern attacks and their defenses appearing in the prestigious sources of computer vision and machine learning research. Besides offering the most comprehensive literature review of adversarial attacks and defenses to date, the article also provides concise definitions of technical terminologies for the non-experts. Finally, it discusses challenges and future outlook of this direction based on the literature since the advent of this research direction. |
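The imperceptible perturbations the description refers to are typically computed from the gradient of the model's loss with respect to the input. As an illustrative sketch only (not code from the surveyed paper), the fast gradient sign method (FGSM) applied to a hypothetical toy linear softmax classifier looks like:

```python
import numpy as np

# Illustrative FGSM-style attack on a toy linear softmax classifier.
# A real attack targets a deep network; the toy model just shows the mechanics:
# perturb the input by epsilon * sign(gradient of the loss w.r.t. the input),
# which bounds each per-feature change by epsilon (imperceptible when small).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features ("pixels")
x = rng.normal(size=8)        # clean input
y = 0                         # true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(W, x, y):
    # cross-entropy of the linear model's prediction against label y
    return -np.log(softmax(W @ x)[y])

def grad_x(W, x, y):
    # d(loss)/dx in closed form: W^T (softmax(Wx) - onehot(y))
    return W.T @ (softmax(W @ x) - np.eye(W.shape[0])[y])

epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x(W, x, y))

print("loss on clean input:     ", loss(W, x, y))
print("loss on perturbed input: ", loss(W, x_adv, y))
print("l_inf perturbation size: ", np.abs(x_adv - x).max())
```

The sign of the gradient (rather than the gradient itself) is used so that the perturbation saturates the l-infinity budget epsilon in every coordinate, maximizing the first-order loss increase under that constraint.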
first_indexed | 2024-12-19T05:00:15Z |
format | Article |
id | doaj.art-e42f524a20ca4b5192c3b0da7c72e541 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-19T05:00:15Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-e42f524a20ca4b5192c3b0da7c72e541 (2022-12-21T20:35:07Z); eng; IEEE; IEEE Access; ISSN 2169-3536; 2021-01-01; vol. 9, pp. 155161–155196; DOI 10.1109/ACCESS.2021.3127960; IEEE document 9614158. Title: Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey. Authors: Naveed Akhtar (https://orcid.org/0000-0003-3406-673X) and Ajmal Mian (https://orcid.org/0000-0002-5206-3842), Department of Computer Science and Software Engineering, The University of Western Australia, Crawley, WA, Australia; Navid Kardan and Mubarak Shah (https://orcid.org/0000-0002-8216-1128), Department of Computer Science, University of Central Florida, Orlando, FL, USA. Abstract: as given in the description field above. URL: https://ieeexplore.ieee.org/document/9614158/. Keywords: Adversarial examples; adversarial defense; adversarial machine learning; black-box attack; deep learning; perturbation. |
spellingShingle | Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah. Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey. IEEE Access. Adversarial examples; adversarial defense; adversarial machine learning; black-box attack; deep learning; perturbation |
title | Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey |
title_full | Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey |
title_fullStr | Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey |
title_full_unstemmed | Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey |
title_short | Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey |
title_sort | advances in adversarial attacks and defenses in computer vision a survey |
topic | Adversarial examples; adversarial defense; adversarial machine learning; black-box attack; deep learning; perturbation |
url | https://ieeexplore.ieee.org/document/9614158/ |
work_keys_str_mv | AT naveedakhtar advancesinadversarialattacksanddefensesincomputervisionasurvey AT ajmalmian advancesinadversarialattacksanddefensesincomputervisionasurvey AT navidkardan advancesinadversarialattacksanddefensesincomputervisionasurvey AT mubarakshah advancesinadversarialattacksanddefensesincomputervisionasurvey |