Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation

Universal adversarial perturbations are image-agnostic, model-independent noise patterns that, when added to any image, can mislead trained deep convolutional neural networks into making incorrect predictions. Since these universal adversarial perturbations can seriously jeopardize the security and integrity...
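
The snippet below is a minimal, illustrative sketch of the idea described in the abstract, not the authors' attack: a single fixed perturbation tensor is added to every input image and the change in a pretrained CNN's predictions is measured. The model choice, the epsilon budget, and the randomly drawn `uap` tensor are all assumptions for illustration; a real universal adversarial perturbation would be optimized over a training set rather than sampled at random.

```python
# Illustrative sketch only: applying one fixed (universal) perturbation to any
# batch of images and checking how many predictions flip. The perturbation
# here is random noise standing in for an optimized UAP.
import torch
import torchvision.models as models

# Example victim model (assumption; any pretrained CNN would do).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# One image-agnostic perturbation, bounded in L-infinity norm so it stays
# quasi-imperceptible. epsilon = 8/255 is a commonly used illustrative budget.
epsilon = 8.0 / 255.0
uap = torch.empty(1, 3, 224, 224).uniform_(-epsilon, epsilon)

def predict(x: torch.Tensor) -> torch.Tensor:
    """Return the model's top-1 class indices for a batch of images."""
    with torch.no_grad():
        return model(x).argmax(dim=1)

# `images` stands in for any batch of inputs scaled to [0, 1].
images = torch.rand(4, 3, 224, 224)

clean_preds = predict(images)
# The same perturbation is added to every image (image-agnostic attack).
adv_preds = predict((images + uap).clamp(0.0, 1.0))

fooled = (clean_preds != adv_preds).float().mean().item()
print(f"fraction of predictions changed by the perturbation: {fooled:.2f}")
```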


Bibliographic Details
Main Authors: Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty, Md Tauhidur Rahman
Format: Article
Language: English
Published: MDPI AG 2023-09-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/14/9/516