Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

This study evaluates the generation of adversarial examples and the resulting robustness of an image classification model. Attacks were performed using the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini and Wagner (CW) attack to perturb the original images and anal...
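Of the three attacks named above, FGSM is the simplest: it takes a single step of size ε in the direction of the sign of the loss gradient with respect to the input. The sketch below illustrates that one step on a toy logistic-regression "classifier" in NumPy; the model, weights, and ε value are illustrative assumptions, not details from the article.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One-step FGSM: move each pixel by epsilon in the direction
    that increases the loss (the sign of the input gradient)."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid range

# Toy setup (illustrative only): a logistic-regression model on a
# flat 16-"pixel" image, standing in for an image classifier.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)   # fake image, pixels in (0, 1)
w = rng.normal(size=16)              # fixed "trained" weights
b = 0.0

def loss_and_grad(x, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w             # dL/dx for sigmoid + BCE
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, grad_x

y = 1.0
loss_clean, g = loss_and_grad(x, y)
x_adv = fgsm_perturb(x, g, epsilon=0.1)
loss_adv, _ = loss_and_grad(x_adv, y)
# With a linear logit, the FGSM step necessarily raises the loss,
# while the perturbation stays bounded by epsilon per pixel.
```

PGD can be seen as this same step applied iteratively with a projection back into the ε-ball around the original image, while CW instead solves an optimization problem for a minimal-norm perturbation.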


Bibliographic Details
Main Authors: William Villegas-Ch, Angel Jaramillo-Alcázar, Sergio Luján-Mora
Format: Article
Language: English
Published: MDPI AG 2024-01-01
Series: Big Data and Cognitive Computing
Online Access: https://www.mdpi.com/2504-2289/8/1/8