Defending Against Local Adversarial Attacks through Empirical Gradient Optimization
Deep neural networks (DNNs) are susceptible to adversarial attacks, including the recently introduced locally visible adversarial patch attack, which achieves a success rate exceeding 96%. These attacks pose significant challenges to DNN security. Various defense methods, such as adversarial trainin...
Main Authors: | Boyang Sun, Xiaoxuan Ma, Hengyou Wang
---|---
Format: | Article
Language: | English
Published: | Faculty of Mechanical Engineering in Slavonski Brod, Faculty of Electrical Engineering in Osijek, Faculty of Civil Engineering in Osijek, 2023-01-01
Series: | Tehnički Vjesnik
Online Access: | https://hrcak.srce.hr/file/446408
Similar Items
- Double adversarial attack against license plate recognition system
  by: Xianyi CHEN, Jun GU, Kai YAN, Dong JIANG, Linfeng XU, Zhangjie FU
  Published: (2023-06-01)
- Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks
  by: Ren-Hung Hwang, et al.
  Published: (2023-01-01)
- On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
  by: Sanglee Park, et al.
  Published: (2020-11-01)
- SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
  by: Amira Guesmi, et al.
  Published: (2024-01-01)