Adversarial Training Methods for Deep Learning: A Systematic Review

Deep neural networks are vulnerable to adversarial attacks such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against such attacks. It is a training s...
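
The abstract names FGSM as a representative attack used in adversarial training. As context, the sketch below shows the standard FGSM perturbation x_adv = x + ε·sign(∇x L(θ, x, y)) in PyTorch; the model, loss, and ε value are illustrative assumptions and not taken from the reviewed paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss).

    model, x, y and epsilon are placeholder assumptions for illustration.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid [0, 1] range
    return x_adv.detach()
```

In adversarial training, examples generated this way are mixed with (or substituted for) clean examples in each training batch so the model's objective covers both.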


Bibliographic Details
Main Authors: Weimin Zhao, Sanaa Alwidian, Qusay H. Mahmoud
Format: Article
Language: English
Published: MDPI AG 2022-08-01
Series: Algorithms
Online Access: https://www.mdpi.com/1999-4893/15/8/283
