Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks

Recent works have demonstrated that imperceptible perturbations of input data, known as adversarial examples, can mislead the output of neural networks. Moreover, the same adversarial sample can transfer across models and be used to fool different neural models. Such vulnerabilities impede the use of neural networks in...
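
The phenomenon the abstract describes can be illustrated with the classic Fast Gradient Sign Method (FGSM). The sketch below is a minimal illustration assuming PyTorch; the helper name `fgsm_attack` and the `epsilon` value are hypothetical, and the article itself may study other attacks.

```python
# Minimal FGSM sketch: craft an imperceptible perturbation that increases
# the classifier's loss. FGSM is one classic attack, shown here only to
# illustrate the idea; it is not necessarily the method used in the article.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of input batch x (hypothetical helper)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Transferability: the same perturbed batch, crafted against one model,
# can be fed to a *different* model to test cross-model attack success.
```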


Bibliographic Details
Main Authors: Kamilya Smagulova, Lina Bacha, Mohammed E. Fouda, Rouwaida Kanj, Ahmed Eltawil
Format: Article
Language: English
Published: MDPI AG, 2024-01-01
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/13/3/592