Image classification adversarial attack with improved resizing transformation and ensemble models
Convolutional neural networks have achieved great success in computer vision, but they can be made to output incorrect predictions when deliberate perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property makes them useful for evaluating network...
Main Authors: | Chenwei Li, Hengwei Zhang, Bo Yang, Jindong Wang |
---|---|
Format: | Article |
Language: | English |
Published: | PeerJ Inc., 2023-07-01 |
Series: | PeerJ Computer Science |
Subjects: | Computer graphics; Adversarial examples; Image classification; Convolutional neural networks; Image transformation; Improved resizing |
Online Access: | https://peerj.com/articles/cs-1475.pdf |
_version_ | 1797771196079538176 |
---|---|
author | Chenwei Li; Hengwei Zhang; Bo Yang; Jindong Wang |
author_facet | Chenwei Li; Hengwei Zhang; Bo Yang; Jindong Wang |
author_sort | Chenwei Li |
collection | DOAJ |
description | Convolutional neural networks have achieved great success in computer vision, but they can be made to output incorrect predictions when deliberate perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property makes them useful for evaluating network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in a black-box attack the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used in training generalizable neural networks, and proposes a resizing invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms other baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal models and defense models. |
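The abstract names the method's two ingredients: a random resizing transformation used as model augmentation, and gradient averaging over an ensemble of models inside an iterative FGSM-style attack. The record does not include the paper's actual implementation, so the sketch below is only a minimal illustration of that general recipe. All names (`random_resize_pad`, `ensemble_resize_attack`) and the use of linear scorers as stand-in "models" are assumptions for demonstration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resize_pad(x, low=20, high=31):
    """Shrink a 32x32 image with nearest-neighbour sampling, then paste
    it at a random offset on a zero canvas of the original size.
    This is a generic resize-and-pad stand-in, not the article's exact
    improved resizing transformation."""
    h, w = x.shape
    new = int(rng.integers(low, high + 1))
    rows = (np.arange(new) * h / new).astype(int)  # source row per output row
    cols = (np.arange(new) * w / new).astype(int)  # source col per output col
    top = int(rng.integers(0, h - new + 1))
    left = int(rng.integers(0, w - new + 1))
    out = np.zeros_like(x)
    out[top:top + new, left:left + new] = x[np.ix_(rows, cols)]
    return out, (rows, cols, top, left)

def ensemble_resize_attack(x, weights, eps=0.1, alpha=0.01, steps=10, copies=4):
    """Iterative FGSM-style attack that averages loss gradients over an
    ensemble of models and over several randomly resized copies.  For
    illustration the "models" are linear scorers (loss = <W, input>), so
    the gradient w.r.t. the transformed input is just W; it is carried
    back through the resize by scatter-adding onto the source pixels."""
    x_adv = x.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(copies):
            _, (rows, cols, top, left) = random_resize_pad(x_adv)
            n = len(rows)
            ii, jj = np.meshgrid(rows, cols, indexing="ij")
            for wmat in weights:  # ensemble: accumulate each model's gradient
                np.add.at(g, (ii, jj), wmat[top:top + n, left:left + n])
        x_adv = x_adv + alpha * np.sign(g)        # FGSM sign step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the L_inf budget
    return x_adv
```

In a real setting the linear scorers would be replaced by the cross-entropy loss of pretrained classifiers, with gradients obtained by automatic differentiation; the structure of the loop (transform, average ensemble gradients, signed step, clip) is what the abstract's description suggests.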
first_indexed | 2024-03-12T21:34:00Z |
format | Article |
id | doaj.art-e449ed9d4eff49dc9ab925ca366c7944 |
institution | Directory Open Access Journal |
issn | 2376-5992 |
language | English |
last_indexed | 2024-03-12T21:34:00Z |
publishDate | 2023-07-01 |
publisher | PeerJ Inc. |
record_format | Article |
series | PeerJ Computer Science |
spelling | doaj.art-e449ed9d4eff49dc9ab925ca366c7944 | 2023-07-27T15:05:05Z | eng | PeerJ Inc. | PeerJ Computer Science | 2376-5992 | 2023-07-01 | volume 9, article e1475 | 10.7717/peerj-cs.1475 | Image classification adversarial attack with improved resizing transformation and ensemble models | Chenwei Li; Hengwei Zhang; Bo Yang; Jindong Wang (all: State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, Henan, China) | Abstract as in the description field above | https://peerj.com/articles/cs-1475.pdf | Computer graphics; Adversarial examples; Image classification; Convolutional neural networks; Image transformation; Improved resizing |
spellingShingle | Chenwei Li; Hengwei Zhang; Bo Yang; Jindong Wang; Image classification adversarial attack with improved resizing transformation and ensemble models; PeerJ Computer Science; Computer graphics; Adversarial examples; Image classification; Convolutional neural networks; Image transformation; Improved resizing |
title | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_full | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_fullStr | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_full_unstemmed | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_short | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_sort | image classification adversarial attack with improved resizing transformation and ensemble models |
topic | Computer graphics; Adversarial examples; Image classification; Convolutional neural networks; Image transformation; Improved resizing |
url | https://peerj.com/articles/cs-1475.pdf |
work_keys_str_mv | AT chenweili imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT hengweizhang imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT boyang imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT jindongwang imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels |