ADSAttack: An Adversarial Attack Algorithm via Searching Adversarial Distribution in Latent Space


Bibliographic Details
Main Authors: Haobo Wang, Chenxi Zhu, Yangjie Cao, Yan Zhuang, Jie Li, Xianfu Chen
Format: Article
Language: English
Published: MDPI AG 2023-02-01
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/12/4/816
Description
Summary: Deep neural networks are susceptible to interference from deliberately crafted noise, which can lead to incorrect classification results. Existing approaches make little use of latent space information and instead modify pixels directly in the input space, which increases computational cost and decreases transferability. In this work, we propose an effective adversarial distribution searching-driven attack (ADSAttack) algorithm to generate adversarial examples against deep neural networks. ADSAttack introduces an affiliated network to search for potential distributions in the image latent space for synthesizing adversarial examples. ADSAttack uses an edge-detection algorithm to locate low-level feature mappings in input space and sketch the minimum effective disturbed area. Experimental results demonstrate that ADSAttack achieves higher transferability, less perceptible perturbations, and faster generation speed compared to traditional algorithms. To generate 1000 adversarial examples, ADSAttack takes 11.08 s and, on average, achieves a success rate of 98.01%.
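The edge-detection step described in the summary can be illustrated with a minimal sketch. This is not the authors' code: the paper does not specify the edge detector here, so a simple finite-difference gradient magnitude stands in for it, and the resulting boolean mask restricts where an (illustrative) perturbation may be applied, mimicking the "minimum effective disturbed area" idea.

```python
# Hedged sketch, not ADSAttack itself: a gradient-magnitude edge mask
# (assumed stand-in for the paper's unspecified edge detector) used to
# confine an additive perturbation to high-gradient regions.

def edge_mask(image, threshold=0.5):
    """Return a boolean mask of high-gradient (edge) pixels.

    image: 2D list of floats in [0, 1].
    """
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, replicating border pixels.
            gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
            gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                mask[y][x] = True
    return mask


def apply_masked_perturbation(image, noise, mask):
    """Add noise only inside the edge mask, clipping results to [0, 1]."""
    return [
        [
            min(1.0, max(0.0, image[y][x] + (noise[y][x] if mask[y][x] else 0.0)))
            for x in range(len(image[0]))
        ]
        for y in range(len(image))
    ]


# Tiny example: a vertical step edge in a 4x4 grayscale image.
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
mask = edge_mask(img, threshold=0.5)
noise = [[0.1] * 4 for _ in range(4)]
adv = apply_masked_perturbation(img, noise, mask)
```

In a full attack pipeline, the perturbation would come from the affiliated network's search over the latent space rather than a constant, but the masking logic is the same: pixels away from edges are left untouched.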
ISSN: 2079-9292