Evolving Architectures With Gradient Misalignment Toward Low Adversarial Transferability
Deep neural network image classifiers are known to be susceptible not only to adversarial examples crafted for them but also to those crafted for other networks. This phenomenon poses a potential security risk in various black-box systems that rely on image classifiers. One of the observations on networks...
| Main Authors: | Kevin Richard G. Operiano, Wanchalerm Pora, Hitoshi Iba, Hiroshi Kera |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9646964/ |
Similar Items
- Maxwell's Demon in MLP-Mixer: towards transferable adversarial attacks
  by: Haoran Lyu, et al.
  Published: (2024-03-01)
- Comprehensive comparisons of gradient-based multi-label adversarial attacks
  by: Zhijian Chen, et al.
  Published: (2024-06-01)
- Enhancing adversarial transferability with local transformation
  by: Yang Zhang, et al.
  Published: (2024-11-01)
- CommanderUAP: a practical and transferable universal adversarial attack on speech recognition models
  by: Zheng Sun, et al.
  Published: (2024-06-01)
- A Framework for Robust Deep Learning Models Against Adversarial Attacks Based on a Protection Layer Approach
  by: Mohammed Nasser Al-Andoli, et al.
  Published: (2024-01-01)