Trustworthiness and certified robustness for deep learning


Bibliographic Details
Main Author: Xia, Song
Other Authors: Yap Kim Hui
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/158769
author Xia, Song
author2 Yap Kim Hui
collection NTU
description Although Deep Learning (DL) has shown its superiority in many complex computer vision tasks, researchers have found in recent years that DL-based systems are extremely vulnerable to adversarial attacks. By adding small, human-imperceptible corruptions to the original inputs, adversarial attacks generate adversarial examples which, despite being very similar to the original inputs, can mislead DL models with a high success rate. Randomized smoothing (RS) is a recently proposed method that provides certified robustness for DL: it guarantees that no adversarial attack can be effective within a certain range. Using Gaussian estimation, RS derives the worst-case decision boundary of a DL model over all possible adversarial attacks. Under this worst-case situation, RS gives a certified robustness radius within which the DL system is guaranteed to return a constant prediction, meaning that no adversarial attack can succeed. Currently, there are two problems in RS. The first is that directly maximizing the certified robustness radius is a non-differentiable optimization, owing to the hard 0-1 mapping and Monte Carlo sampling. The second is that useful information in the original data is corrupted by the high-variance Gaussian noise. To solve these problems, this dissertation first analyzes current robustness-estimation optimization methods and proposes a new generalized consistency optimization, which consists of a looser accuracy term and a tighter robustness term. In addition, this dissertation uses linear decomposition to decompose the data according to covariance and select the useful information. Experimental results show that the proposed generalized consistency optimization with linear decomposition outperforms previous methods and achieves new state-of-the-art results.
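For context, the certification the abstract refers to follows the standard randomized-smoothing recipe (Cohen et al., 2019): draw Gaussian corruptions of the input, take the majority class of the base classifier's hard predictions, and convert a lower confidence bound on that class's probability into a certified L2 radius R = sigma * Phi^{-1}(p_A). The sketch below illustrates only this standard procedure, not the generalized consistency optimization or linear decomposition proposed in the dissertation; the function name classify and the parameter values are illustrative assumptions, and scipy and statsmodels are assumed available.

    import numpy as np
    from scipy.stats import norm
    from statsmodels.stats.proportion import proportion_confint

    def certify(classify, x, sigma, n=10000, alpha=0.001):
        # Monte Carlo certification of the smoothed classifier
        # g(x) = argmax_c P[classify(x + eps) = c], with eps ~ N(0, sigma^2 I).
        # classify is assumed to map a batch of inputs to hard class labels.
        noise = np.random.randn(n, *x.shape) * sigma        # Gaussian corruptions
        preds = classify(x[None, ...] + noise)               # hard 0-1 mapping: n class labels
        counts = np.bincount(preds)
        c_hat = int(counts.argmax())
        # One-sided lower confidence bound on p_A (Clopper-Pearson interval)
        p_lower, _ = proportion_confint(counts[c_hat], n, alpha=2 * alpha, method="beta")
        if p_lower <= 0.5:
            return None, 0.0                                 # abstain: cannot certify
        return c_hat, sigma * norm.ppf(p_lower)              # certified radius R = sigma * Phi^{-1}(p_A)

Because the radius depends on counts of hard argmax labels obtained by Monte Carlo sampling, it is not differentiable with respect to the base classifier's parameters, which is the first of the two problems the dissertation addresses.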
format Thesis-Master by Coursework
id ntu-10356/158769
institution Nanyang Technological University
language English
publishDate 2022
publisher Nanyang Technological University
spelling ntu-10356/158769 2022-05-31T04:36:27Z Trustworthiness and certified robustness for deep learning Xia, Song Yap Kim Hui School of Electrical and Electronic Engineering ekhyap@ntu.edu.sg Engineering::Electrical and electronic engineering Master of Science (Signal Processing) 2022-05-31T04:36:26Z 2022-05-31T04:36:26Z 2022 Thesis-Master by Coursework Xia, S. (2022). Trustworthiness and certified robustness for deep learning. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158769 en ICP1900093 application/pdf Nanyang Technological University
title Trustworthiness and certified robustness for deep learning
topic Engineering::Electrical and electronic engineering
url https://hdl.handle.net/10356/158769