Multimodal graph attention network for COVID-19 outcome prediction

Abstract When dealing with a newly emerging disease such as COVID-19, the impact of patient- and disease-specific factors (e.g., body weight or known co-morbidities) on the immediate course of the disease is largely unknown. An accurate prediction of the most likely individual disease progression can improve the planning of limited resources and finding the optimal treatment for patients. In the case of COVID-19, the need for intensive care unit (ICU) admission of pneumonia patients can often only be determined on short notice by acute indicators such as vital signs (e.g., breathing rate, blood oxygen levels), whereas statistical analysis and decision support systems that integrate all of the available data could enable an earlier prognosis. To this end, we propose a holistic, multimodal graph-based approach combining imaging and non-imaging information. Specifically, we introduce a multimodal similarity metric to build a population graph that shows a clustering of patients. For each patient in the graph, we extract radiomic features from a segmentation network that also serves as a latent image feature encoder. Together with clinical patient data like vital signs, demographics, and lab results, these modalities are combined into a multimodal representation of each patient. This feature extraction is trained end-to-end with an image-based Graph Attention Network to process the population graph and predict the COVID-19 patient outcomes: admission to ICU, need for ventilation, and mortality. To combine multiple modalities, radiomic features are extracted from chest CTs using a segmentation neural network. Results on a dataset collected in Klinikum rechts der Isar in Munich, Germany and the publicly available iCTCF dataset show that our approach outperforms single modality and non-graph baselines. Moreover, our clustering and graph attention increases understanding of the patient relationships within the population graph and provides insight into the network’s decision-making process.

Bibliographic Details
Main Authors: Matthias Keicher, Hendrik Burwinkel, David Bani-Harouni, Magdalini Paschali, Tobias Czempiel, Egon Burian, Marcus R. Makowski, Rickmer Braren, Nassir Navab, Thomas Wendler
Format: Article
Language: English
Published: Nature Portfolio, 2023-11-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-023-46625-8
collection DOAJ
description Abstract When dealing with a newly emerging disease such as COVID-19, the impact of patient- and disease-specific factors (e.g., body weight or known co-morbidities) on the immediate course of the disease is largely unknown. An accurate prediction of the most likely individual disease progression can improve the planning of limited resources and finding the optimal treatment for patients. In the case of COVID-19, the need for intensive care unit (ICU) admission of pneumonia patients can often only be determined on short notice by acute indicators such as vital signs (e.g., breathing rate, blood oxygen levels), whereas statistical analysis and decision support systems that integrate all of the available data could enable an earlier prognosis. To this end, we propose a holistic, multimodal graph-based approach combining imaging and non-imaging information. Specifically, we introduce a multimodal similarity metric to build a population graph that shows a clustering of patients. For each patient in the graph, we extract radiomic features from a segmentation network that also serves as a latent image feature encoder. Together with clinical patient data like vital signs, demographics, and lab results, these modalities are combined into a multimodal representation of each patient. This feature extraction is trained end-to-end with an image-based Graph Attention Network to process the population graph and predict the COVID-19 patient outcomes: admission to ICU, need for ventilation, and mortality. To combine multiple modalities, radiomic features are extracted from chest CTs using a segmentation neural network. Results on a dataset collected in Klinikum rechts der Isar in Munich, Germany and the publicly available iCTCF dataset show that our approach outperforms single modality and non-graph baselines. Moreover, our clustering and graph attention increases understanding of the patient relationships within the population graph and provides insight into the network’s decision-making process.
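The abstract describes two core mechanisms: building a population graph from a similarity metric over multimodal patient features, and aggregating information from similar patients via attention. The following is a minimal, illustrative sketch of those two steps only, not the authors' implementation: the toy feature vectors, the choice of cosine similarity, and the plain softmax attention are all assumptions for demonstration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_graph(feats, k=2):
    """Population graph: connect each patient to its k most similar peers."""
    edges = {}
    for i, fi in enumerate(feats):
        sims = sorted(((cosine(fi, fj), j) for j, fj in enumerate(feats) if j != i),
                      reverse=True)
        edges[i] = [j for _, j in sims[:k]]
    return edges

def attention_aggregate(feats, edges):
    """One attention step: softmax over neighbor similarities, weighted mean of features."""
    out = []
    for i, fi in enumerate(feats):
        nbrs = edges[i]
        scores = [cosine(fi, feats[j]) for j in nbrs]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]  # numerically stable softmax weights
        z = sum(w)
        agg = [sum(wk * feats[j][d] for wk, j in zip(w, nbrs)) / z
               for d in range(len(fi))]
        out.append(agg)
    return out

# Toy "multimodal" patient features (e.g., concatenated clinical + imaging values).
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
graph = knn_graph(feats, k=1)
pooled = attention_aggregate(feats, graph)
```

In the paper this graph construction and attention are learned end-to-end together with the image feature encoder; the sketch fixes both to hand-written similarity purely to make the data flow concrete.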
first_indexed 2024-03-11T11:04:10Z
format Article
id doaj.art-38e0e3b15c7049d4a2d8c23892dc616a
institution Directory Open Access Journal
issn 2045-2322
language English
last_indexed 2024-03-11T11:04:10Z
publishDate 2023-11-01
publisher Nature Portfolio
record_format Article
series Scientific Reports
spelling doaj.art-38e0e3b15c7049d4a2d8c23892dc616a (2023-11-12T12:17:30Z); eng; Nature Portfolio; Scientific Reports; ISSN 2045-2322; published 2023-11-01, vol. 13, no. 1, pp. 1-14; doi 10.1038/s41598-023-46625-8
Title: Multimodal graph attention network for COVID-19 outcome prediction
Authors and affiliations:
Matthias Keicher, Hendrik Burwinkel, David Bani-Harouni, Magdalini Paschali, Tobias Czempiel, Nassir Navab, Thomas Wendler: Computer Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich
Egon Burian, Marcus R. Makowski, Rickmer Braren: Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich
title Multimodal graph attention network for COVID-19 outcome prediction
url https://doi.org/10.1038/s41598-023-46625-8