ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
Deep neural network (DNN)-based systems are vulnerable to adversarial perturbations, which can cause classification tasks to fail. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without gradient evaluation or the training of a substitute model, which further improves the chance of task failure caused by adversarial perturbation. In untargeted attacks, the proposed method obtained success rates of 100%, 98.6%, and 90.00% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only achieves a high attack success rate with fewer queries in the black-box setting, but also breaks some existing defenses to a large extent, and it is not limited by model structure or size, which provides further research directions for deep learning evasion attacks and defenses.
Main Authors: | Han Cao, Chengxiang Si, Qindong Sun, Yanxiao Liu, Shancang Li, Prosanta Gope |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-03-01 |
Series: | Entropy |
Subjects: | deep neural networks; adversarial examples; image classification; information security; black-box attack |
Online Access: | https://www.mdpi.com/1099-4300/24/3/412 |
ISSN: | 1099-4300 |
DOI: | 10.3390/e24030412 |
Citation: | Entropy, Vol. 24, No. 3, Article 412 (2022) |
Author Affiliations: |
Han Cao: Key Laboratory of Network Computing and Security, Xi’an University of Technology, Xi’an 710048, China
Chengxiang Si: National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC), Beijing 100029, China
Qindong Sun: Key Laboratory of Network Computing and Security, Xi’an University of Technology, Xi’an 710048, China
Yanxiao Liu: Key Laboratory of Network Computing and Security, Xi’an University of Technology, Xi’an 710048, China
Shancang Li: Department of Computer Science and Creative Technology, University of the West of England, Bristol BS16 1QY, UK
Prosanta Gope: Department of Computer Science, University of Sheffield, Sheffield S10 2TN, UK |
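The abstract above describes the technique only at a high level: a query-based, gradient-free attack that searches for adversarial perturbations with the Artificial Bee Colony (ABC) heuristic instead of gradient estimation or a substitute model. As a rough illustration of that idea (not the authors' ABCAttack implementation), the sketch below assumes a black-box `predict(image)` function returning class probabilities for an image with pixel values in [0, 1], minimizes the true-class confidence inside an L-infinity ball of radius `eps`, and keeps only the employed-bee and scout phases of standard ABC; the omitted onlooker phase, the parameter names, and the defaults are all illustrative assumptions.

```python
import numpy as np

def abc_attack(predict, x, true_label, eps=0.1, n_bees=20, limit=10,
               max_queries=10000, rng=None):
    """Illustrative gradient-free black-box attack in the spirit of
    Artificial Bee Colony optimization. `predict(image)` must return a
    1-D array of class probabilities; it is the only model access used."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = x - eps, x + eps  # search inside an L-infinity ball around x

    def fitness(cand):
        # Confidence assigned to the true class; lower means closer to fooling.
        return predict(np.clip(cand, 0.0, 1.0))[true_label]

    # Initialize food sources (candidate perturbed images) at random.
    sources = [np.clip(x + rng.uniform(-eps, eps, x.shape), lo, hi)
               for _ in range(n_bees)]
    scores = [fitness(s) for s in sources]
    trials = [0] * n_bees
    queries = n_bees

    while queries < max_queries:
        # Employed-bee phase: move each source relative to a random partner.
        for i in range(n_bees):
            j = int(rng.integers(n_bees))
            phi = rng.uniform(-1.0, 1.0, x.shape)
            cand = np.clip(sources[i] + phi * (sources[i] - sources[j]), lo, hi)
            s = fitness(cand)
            queries += 1
            if s < scores[i]:
                sources[i], scores[i], trials[i] = cand, s, 0
            else:
                trials[i] += 1
        # Scout phase: replace sources that have stopped improving.
        for i in range(n_bees):
            if trials[i] > limit:
                sources[i] = np.clip(x + rng.uniform(-eps, eps, x.shape), lo, hi)
                scores[i] = fitness(sources[i])
                queries += 1
                trials[i] = 0
        # Untargeted success test on the current best source.
        best = int(np.argmin(scores))
        adv = np.clip(sources[best], 0.0, 1.0)
        queries += 1
        if int(np.argmax(predict(adv))) != true_label:
            return adv, queries
    return None, queries
```

Under these assumptions, `adv, n = abc_attack(predict, x, y)` either returns an adversarial image after `n` model queries or gives up at the query budget, mirroring the black-box, query-counting setting the abstract describes.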