Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

Abstract Although state-of-the-art deep neural network models are known to be robust to random perturbations, it has been shown that these architectures are quite vulnerable to deliberately crafted, quasi-imperceptible perturbations. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many studies have been conducted to develop new attack methods and devise defense techniques that yield more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, together with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as DeepFool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, the proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy.
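The two ideas in the abstract can be sketched in a few lines of code. The snippet below is an illustration only, not the paper's exact formulation: it uses a toy softmax classifier, predictive entropy of the final probability outputs as the epistemic-uncertainty proxy, an FGSM-style step on a combined loss-plus-uncertainty objective as the attack, and a step against the entropy gradient as the "reversal" defense. The model weights, `lam`, and `eps` values are all assumed for demonstration.

```python
import numpy as np

# Hypothetical toy classifier (4 features, 3 classes); weights are illustrative.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)

def probs(x):
    z = W @ x + b
    e = np.exp(z - z.max())              # numerically stable softmax
    return e / e.sum()

def cross_entropy(x, y):
    return -np.log(probs(x)[y] + 1e-12)

def predictive_entropy(x):
    # Uncertainty proxy computed from the model's final probability outputs.
    p = probs(x)
    return -np.sum(p * np.log(p + 1e-12))

def num_grad(f, x, h=1e-5):
    # Central-difference gradient so the sketch needs no autograd library.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2 * h)
    return g

def hybrid_attack(x, y, eps=0.1, lam=0.5):
    # Attack sketch: step along sign(grad(loss + lam * entropy)),
    # pushing the sample toward both higher loss and higher uncertainty.
    g = num_grad(lambda z: cross_entropy(z, y) + lam * predictive_entropy(z), x)
    return x + eps * np.sign(g)

def uncertainty_reversal(x, eps=0.05):
    # Defense sketch: nudge the input against the uncertainty gradient,
    # pulling near-boundary samples back toward a low-uncertainty region.
    return x - eps * np.sign(num_grad(predictive_entropy, x))

x = rng.normal(size=4)
y = int(np.argmax(probs(x)))             # model's own prediction as the label
x_adv = hybrid_attack(x, y)
x_rev = uncertainty_reversal(x_adv)
print(predictive_entropy(x), predictive_entropy(x_adv))
```

Here `lam` balances the loss term against the uncertainty term, and `eps` is the per-coordinate perturbation budget; both would be tuned per dataset in practice.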


Bibliographic Details
Main Authors: Omer Faruk Tuna, Ferhat Ozgur Catak, M. Taner Eskil
Format: Article
Language: English
Published: Springer, 2022-04-01
Series: Complex & Intelligent Systems
ISSN: 2199-4536, 2198-6053
Subjects: Adversarial Machine Learning; Uncertainty; Security; Deep Learning
Online Access: https://doi.org/10.1007/s40747-022-00701-0