Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images


Bibliographic Details
Main Authors: Biprodip Pal, Debashis Gupta, Md. Rashed-Al-Mahfuz, Salem A. Alyami, Mohammad Ali Moni
Author Affiliations:
  Biprodip Pal, Debashis Gupta: Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
  Md. Rashed-Al-Mahfuz: Department of Computer Science and Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
  Salem A. Alyami: Department of Mathematics and Statistics, Imam Mohammad Ibn Saud Islamic University, Riyadh 13318, Saudi Arabia
  Mohammad Ali Moni: WHO Collaborating Centre on eHealth, School of Public Health and Community Medicine, UNSW Sydney, Sydney, NSW 2052, Australia
Format: Article
Language: English
Published: MDPI AG, 2021-05-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app11094233
Subjects: COVID-19; deep learning; adversarial attack; FGSM attack; radiology images
Online Access: https://www.mdpi.com/2076-3417/11/9/4233
Description
The COVID-19 pandemic requires the rapid isolation of infected patients. High-sensitivity radiology imaging could therefore be a key diagnostic technique alongside the polymerase chain reaction approach. Several studies have proposed deep learning algorithms to detect COVID-19 symptoms, motivated by their success in chest radiography image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in these studies are based on pre-trained deep learning models. The open-source nature of such models, together with the limited variation in radiology image-capturing environments, makes these diagnosis systems vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack using two frequently used models, VGG16 and Inception-v3. First, we developed two transfer learning models for X-ray- and CT-image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Second, our study illustrates that misclassification can occur with a very small perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack on these models for X-ray and CT images, respectively, without any visually perceptible change in the images. In addition, we demonstrated that a successful FGSM attack can reduce classification accuracy to 16.67% and 55.56% for X-ray images, and to 36% and 40% for CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation in the adversarial images. Finally, we showed that the correct-class probability of a test image, which should ideally approach 1, drops for both models as the perturbation grows; it can fall to 0.24 and 0.17 for the VGG16 model on X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such systems requires greater robustness.
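For readers unfamiliar with the attack the abstract describes, FGSM perturbs an input x one step along the sign of the loss gradient, x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The following is a minimal PyTorch sketch of that attack against a VGG16 transfer-learning classifier of the kind the study evaluates; it is not the authors' code, and the two-class head, the placeholder input tensors, and the choice of epsilon = 0.009 (the X-ray perturbation magnitude quoted above) are illustrative assumptions.

```python
# Minimal FGSM sketch (assumptions noted above; not the authors' implementation).
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning setup: ImageNet-pre-trained VGG16 with a 2-class head
# (hypothetical COVID-19 vs. normal classifier).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)
model.eval()

def fgsm_attack(model, image, label, epsilon=0.009):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = image + epsilon * image.grad.sign()
    # Assumes pixels are in [0, 1]; clamp keeps the image valid.
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage on one preprocessed chest X-ray (batch of 1, 3x224x224).
x = torch.rand(1, 3, 224, 224)  # placeholder for a real X-ray tensor
y = torch.tensor([1])           # placeholder ground-truth label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # compare predictions
```

Because the perturbation is bounded by epsilon per pixel, an epsilon as small as 0.009 leaves the radiograph visually unchanged while still flipping the predicted class, which is the vulnerability the study quantifies.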