Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW
This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign method, the Projected Gradient Descent method, and the Carlini and Wagner attack to perturb the original images and anal...
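As a rough illustration of the first attack named in the abstract, below is a minimal FGSM sketch in PyTorch. The classifier, the assumption that pixel values lie in [0, 1], and the epsilon value are illustrative choices, not details taken from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Generate adversarial examples with a single gradient-sign step (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep the perturbed images in the valid [0, 1] pixel range (assumed here).
    return adv_images.clamp(0.0, 1.0).detach()
```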
| Main Authors: | William Villegas-Ch, Angel Jaramillo-Alcázar, Sergio Luján-Mora |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-01-01 |
| Series: | Big Data and Cognitive Computing |
| Online Access: | https://www.mdpi.com/2504-2289/8/1/8 |
Similar Items

- Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation
  by: Yutaro Yamada, et al.
  Published: (2024-03-01)
- A Framework for Robust Deep Learning Models Against Adversarial Attacks Based on a Protection Layer Approach
  by: Mohammed Nasser Al-Andoli, et al.
  Published: (2024-01-01)
- Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
  by: Xingjian Li, et al.
  Published: (2022-01-01)
- ZeroGrad: Costless conscious remedies for catastrophic overfitting in the FGSM adversarial training
  by: Zeinab Golgooni, et al.
  Published: (2023-09-01)
- Deep Adversarial Reinforcement Learning Method to Generate Control Policies Robust Against Worst-Case Value Predictions
  by: Kohei Ohashi, et al.
  Published: (2023-01-01)