Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
Deep neural networks (DNNs) have useful applications in machine learning tasks involving recognition and pattern analysis. Despite these favorable applications, DNNs can be exploited by adversarial examples. An adversarial example, created by adding a small amount of noise to an original sample, can cause a DNN to misclassify it. Under specific circumstances, it may be necessary to create a selective untargeted adversarial example that will not be classified as certain avoided classes. Such is the case, for example, when a modified tank cover must cause misclassification by the enemy's DNN, but the DNN must misclassify the modified tank as a class other than certain avoided classes, such as tank, armored vehicle, or self-propelled gun. In this study, we propose a selective untargeted adversarial example that achieves a 100% attack success rate with minimal distortion. The proposed scheme creates an adversarial example that will not be classified as any of the avoided classes while minimizing distortion of the original sample; to do so, the generation step jointly minimizes the probability of the avoided classes and the distortion of the original sample. We evaluated the scheme on the MNIST and CIFAR-10 datasets using the TensorFlow library. The experimental results demonstrate that the proposed scheme creates selective untargeted adversarial examples with a 100% attack success rate and minimal distortion (1.325 and 34.762 for MNIST and CIFAR-10, respectively).
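The abstract states the objective only in words. One plausible formalization (our reading, not necessarily the authors' exact loss) is a distortion-plus-hinge objective over the set $A$ of avoided classes, where $Z(\cdot)$ denotes the classifier's logits, $c$ trades distortion against attack strength, and $\kappa$ is an optional confidence margin:

$$\min_{\delta}\; \|\delta\|_2^2 \;+\; c \cdot \max\Bigl(\max_{i \in A} Z(x+\delta)_i \;-\; \max_{j \notin A} Z(x+\delta)_j,\; -\kappa\Bigr)$$

Under this reading, the attack succeeds once every avoided class is outscored by some allowed class, and the $\|\delta\|_2^2$ term keeps the perturbation small.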
Main Authors: | Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi |
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Machine learning, adversarial example, deep neural network (DNN), avoided classes |
Online Access: | https://ieeexplore.ieee.org/document/8727886/ |
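As a concrete illustration of the objective sketched above, here is a minimal TensorFlow 2 sketch. It is our own illustrative code, not the authors' implementation; the function name and the defaults for `c`, `kappa`, `lr`, and `steps` are assumptions chosen for readability.

```python
import tensorflow as tf

def selective_untargeted_attack(model, x, avoided, c=10.0, kappa=0.0,
                                lr=0.01, steps=300):
    """Illustrative sketch: push x away from every class index in `avoided`
    while keeping the L2 distortion small.

    model   : callable mapping a batch of images to pre-softmax logits
    x       : original sample, shape (1, H, W, C), values in [0, 1]
    avoided : list of class indices the adversarial example must not take
    """
    num_classes = int(model(x).shape[-1])
    # 1.0 for avoided classes, 0.0 for allowed ones.
    avoided_mask = tf.reduce_sum(tf.one_hot(avoided, num_classes), axis=0)
    delta = tf.Variable(tf.zeros_like(x))  # perturbation to optimize
    opt = tf.keras.optimizers.Adam(learning_rate=lr)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            adv = tf.clip_by_value(x + delta, 0.0, 1.0)  # keep a valid image
            logits = model(adv)[0]
            # Best logit among avoided classes vs. best allowed logit.
            avoided_max = tf.reduce_max(logits - 1e9 * (1.0 - avoided_mask))
            allowed_max = tf.reduce_max(logits - 1e9 * avoided_mask)
            # Hinge term stays positive while an avoided class still wins.
            attack_loss = tf.maximum(avoided_max - allowed_max + kappa, 0.0)
            distortion = tf.reduce_sum(tf.square(delta))
            loss = distortion + c * attack_loss
        grad = tape.gradient(loss, delta)
        opt.apply_gradients([(grad, delta)])

    return tf.clip_by_value(x + delta, 0.0, 1.0)

# Hypothetical usage: keep a CIFAR-10 image away from classes 0 (airplane)
# and 8 (ship), letting the optimizer pick any other class:
# adv = selective_untargeted_attack(model, x, avoided=[0, 8])
```

In practice the constant `c` would be tuned (e.g., by search) to find the smallest distortion that still drives every avoided class below some allowed class; the sketch fixes it only for brevity.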
author | Hyun Kwon; Yongchul Kim; Hyunsoo Yoon; Daeseon Choi
collection | DOAJ |
format | Article |
id | doaj.art-945f11f2fb4b42cb9e89bca0ee090bf1 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
publishDate | 2019-01-01 |
publisher | IEEE |
series | IEEE Access |
doi | 10.1109/ACCESS.2019.2920410
volume | 7
pages | 73493-73503
author_orcid | Hyun Kwon: https://orcid.org/0000-0003-1169-9892
affiliations | Hyun Kwon: School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, South Korea; Yongchul Kim: Department of Electrical Engineering, Korea Military Academy, Seoul, South Korea; Hyunsoo Yoon: School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, South Korea; Daeseon Choi: Department of Medical Information, Kongju National University, Gongju-si, South Korea
title | Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes |
topic | Machine learning; adversarial example; deep neural network (DNN); avoided classes
url | https://ieeexplore.ieee.org/document/8727886/ |