Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks

Deep learning, especially deep neural networks (DNNs), is at the heart of the current rise of artificial intelligence, and DNNs have driven the major breakthroughs of the last few years. Recent work has demonstrated that DNNs are vulnerable to human-crafted adversarial examples, which look normal to the human eye. Such adversarial instances can fool DNNs into misbehaving as adversaries intend, with serious consequences for the many DNN-based applications in daily life. This thesis is therefore dedicated to revealing the vulnerabilities of deep learning algorithms and to developing defense strategies that combat adversaries effectively. We study current DNNs from a security perspective with two sides: attack and defense. On the attack front, we explore test-time attacks against DNNs with two types of adversarial examples: adversarial perturbations and adversarial patches. On the defense front, we develop solutions for defending against adversarial examples and investigate robustness-preserving distillation techniques.
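
For readers unfamiliar with the attack setting the abstract describes, a minimal sketch of a test-time adversarial perturbation is the fast gradient sign method (FGSM; Goodfellow et al., 2015), a standard baseline from the literature rather than the thesis's own method. The classifier `model`, the labeled batch `(x, y)`, and the budget `eps` are all assumptions for illustration.

```python
# Minimal FGSM-style adversarial perturbation (illustrative sketch only).
# `model` is an arbitrary pretrained classifier; `x` is an input batch with
# pixel values in [0, 1]; `y` holds the true labels. None of these names
# come from the thesis itself.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Return x plus a small sign-gradient perturbation that raises the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to
    # the valid pixel range so the perturbed image still looks normal.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

For an image classifier on CIFAR-10, for example, `x` would be a batch of shape (N, 3, 32, 32) in [0, 1], and `eps = 8/255` is a commonly used perturbation budget.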

Bibliographic Details
Main Author: Bai, Tao
Other Authors: Jun Zhao, Wen Bihan
School: School of Computer Science and Engineering
Supervisor Contact: junzhao@ntu.edu.sg, bihan.wen@ntu.edu.sg
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/160963
DOI: 10.32657/10356/160963
Citation: Bai, T. (2022). Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/160963
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
File Format: application/pdf