Explainable Vision Transformers and Radiomics for COVID-19 Detection in Chest X-rays

The rapid spread of COVID-19 across the globe since its emergence has pushed many countries’ healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing burden on healthcare systems, it is critical to correctly identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turnaround time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-ray (CXR) images for COVID-19 detection. However, existing deep convolutional neural network (CNN) methods fail to capture the global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in CXR images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, pneumonia, and normal cases) on a dataset consisting of 7598 COVID-19 CXR images, 8552 CXR images of healthy patients, and 5674 pneumonia CXR images. The models achieved high performance, with an Area Under the Curve (AUC) of 0.99 for multiclass classification (COVID-19 vs. other pneumonia vs. normal) and a sensitivity of 0.99 for the COVID-19 class. These results outperform comparable state-of-the-art CNN-based models for COVID-19 detection on CXR images. The attention maps of the proposed model show that it can efficiently identify the signs of COVID-19.
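The abstract describes fine-tuning pretrained vision transformer (ViT) models for a three-class CXR classification task (COVID-19, pneumonia, normal). The sketch below is a minimal illustration of that kind of setup, not the authors' implementation: the timm model variant (vit_base_patch16_224), the cxr_data directory layout, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch (not the paper's code): fine-tune a pretrained ViT for
# 3-class CXR classification (COVID-19 / pneumonia / normal) with timm.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Standard ImageNet preprocessing; the paper's exact augmentation is not reproduced here.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: cxr_data/train/{covid,pneumonia,normal}/*.png
train_set = datasets.ImageFolder("cxr_data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load an ImageNet-pretrained ViT and replace its head with a 3-way classifier.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

For explainability, the per-head attention weights of the fine-tuned ViT can be aggregated into an attention map over image patches, which is the kind of visualization the abstract refers to.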

Bibliographic Details
Main Authors: Mohamed Chetoui, Moulay A. Akhloufi
Format: Article
Language: English
Published: MDPI AG, 2022-05-01
Series: Journal of Clinical Medicine
ISSN: 2077-0383
DOI: 10.3390/jcm11113013
Author Affiliation: Perception, Robotics, and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
Subjects: vision transformers; COVID-19; chest X-ray; pneumonia; radiology
Online Access: https://www.mdpi.com/2077-0383/11/11/3013