Exploiting the Vulnerability of Deep Learning-Based Artificial Intelligence Models in Medical Imaging: Adversarial Attacks
Main Authors:
Format: Article
Language: English
Published: The Korean Society of Radiology, 2019-03-01
Series: 대한영상의학회지 (Journal of the Korean Society of Radiology)
Subjects:
Online Access: https://doi.org/10.3348/jksr.2019.80.2.259
Summary: Due to rapid developments in deep learning models, artificial intelligence (AI) models are expected to enhance clinical diagnostic ability and work efficiency by assisting physicians. Therefore, many hospitals and private companies are competing to develop AI-based automatic diagnostic systems using medical images. In the near future, many deep learning-based automatic diagnostic systems will be used clinically. However, the possibility of adversarial attacks exploiting certain vulnerabilities of deep learning algorithms is a major obstacle to deploying deep learning-based systems in clinical practice. In this paper, we examine in detail the principles and methods of adversarial attacks that can be mounted against deep learning models dealing with medical images, the problems that can arise, and the preventive measures that can be taken against them.
ISSN: 1738-2637, 2288-2928
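The summary above refers to adversarial attacks that exploit vulnerabilities of deep learning models. As a minimal, purely illustrative sketch (not the article's own method or data), the fast gradient sign method (FGSM) perturbs an input image along the sign of the loss gradient so that a small, visually imperceptible change can flip the model's prediction; the model, input tensor, label, and epsilon below are placeholder assumptions.

```python
# Illustrative FGSM sketch in PyTorch; all names and values are placeholders,
# not drawn from the article.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` one signed-gradient step in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Clamp back to the valid pixel range so the result is still a plausible image.
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage with a generic classifier standing in for a medical-imaging model.
model = models.resnet18(weights=None, num_classes=2).eval()
x = torch.rand(1, 3, 224, 224)   # placeholder "radiograph"
y = torch.tensor([1])            # placeholder ground-truth label
x_adv = fgsm_attack(model, x, y, epsilon=0.01)
print((x_adv - x).abs().max())   # per-pixel perturbation is bounded by epsilon
```

The design point the sketch illustrates is that the perturbation is bounded (here by epsilon in pixel intensity), which is why such attacks can be hard to detect by visual inspection of the modified image.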