DefenseFea: An Input Transformation Feature Searching Algorithm Based Latent Space for Adversarial Defense
Deep neural network-based image classification systems can suffer from adversarial attack algorithms, which generate input examples by adding deliberately crafted yet imperceptible noise to original input images. These crafted examples can fool such systems and further threaten their security. In this...
Main Authors: Pan Zhang, Yangjie Cao, Chenxi Zhu, Yan Zhuang, Haobo Wang, Jie Li
Format: Article
Language: English
Published: Sciendo, 2024-02-01
Series: Foundations of Computing and Decision Sciences
Online Access: https://doi.org/10.2478/fcds-2024-0002
Similar Items
- ADSAttack: An Adversarial Attack Algorithm via Searching Adversarial Distribution in Latent Space
  by: Haobo Wang, et al.
  Published: (2023-02-01)
- Adversarial attacks and defenses in deep learning
  by: LIU Ximeng, et al.
  Published: (2020-10-01)
- Adversarial Attack and Defense on Deep Neural Network-Based Voice Processing Systems: An Overview
  by: Xiaojiao Chen, et al.
  Published: (2021-09-01)
- Adversarial Attack and Defense Strategies of Speaker Recognition Systems: A Survey
  by: Hao Tan, et al.
  Published: (2022-07-01)
- Square-Based Black-Box Adversarial Attack on Time Series Classification Using Simulated Annealing and Post-Processing-Based Defense
  by: Sichen Liu, et al.
  Published: (2024-02-01)