Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification

Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called a universal adversarial perturbation (UAP), are a realistic security threat to the practical application of DNNs for medical imaging. Given that computer-based systems are generall...


Bibliographic Details
Main Authors: Kazuki Koga, Kazuhiro Takemoto
Format: Article
Language: English
Published: MDPI AG, 2022-04-01
Series: Algorithms
Online Access: https://www.mdpi.com/1999-4893/15/5/144