Adversarial Attack Using Sparse Representation of Feature Maps

Deep neural networks can be fooled by small, imperceptible perturbations called adversarial examples. Although these examples are carefully crafted, they raise two major concerns. In some cases, the generated adversarial examples are much larger than minimal adversarial perturbations, while in others th...


Bibliographic Details
Main Authors: Maham Jahangir, Faisal Shafait
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9953083/