Explaining graph convolutional network predictions for clinicians—An explainable AI approach to Alzheimer's disease classification
Main Authors: | Sule Tekkesinoglu, Sara Pudas |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2024-01-01 |
Series: | Frontiers in Artificial Intelligence |
Subjects: | explainable AI; multimodal data; graph convolutional networks; Alzheimer's disease; node classification |
Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2023.1334613/full |
---|---|
author | Sule Tekkesinoglu; Sara Pudas
collection | DOAJ |
description | Introduction: Graph-based representations are becoming more common in the medical domain, where each node represents a patient and the edges signify associations between patients, relating individuals with disease and symptoms in a node classification task. In this study, a Graph Convolutional Network (GCN) model was used to capture differences in neurocognitive, genetic, and brain atrophy patterns that can predict cognitive status, ranging from Normal Cognition (NC) to Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD), on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Elucidating model predictions is vital in medical applications to promote clinical adoption and establish physician trust. We therefore introduce a decomposition-based explanation method for individual patient classification. Methods: Our method analyzes the output variations that result from decomposing input values, which allows us to determine each feature's degree of impact on the prediction. Through this process, we gain insight into how each feature from the various modalities, at both the individual and group levels, contributes to the diagnostic result. Given that graph data carries critical information in its edges, we also studied relational data by silencing all the edges of a particular class, thereby obtaining explanations at the neighborhood level. Results: Our functional evaluation showed that the explanations remain stable under minor changes in input values, specifically for edge weights exceeding 0.80. A comparative analysis against SHAP values yielded comparable results with significantly reduced computational time. To further validate the model's explanations, we conducted a survey study with 11 domain experts. The majority (71%) of the responses confirmed the correctness of the explanations, with understandability rated above six on a 10-point scale. Discussion: Strategies to overcome perceived limitations, such as the GCN's overreliance on demographic information, are discussed to facilitate future adoption into clinical practice and to gain clinicians' trust in a diagnostic decision support system.
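The Methods summary describes two explanation probes: scoring each input feature by the output change observed when its value is decomposed, and silencing all edges to neighbors of a given class to obtain neighborhood-level explanations. The sketch below illustrates both ideas on a toy two-layer GCN with synthetic data. It is a minimal sketch, not the authors' implementation: the functions `gcn_predict`, `feature_impact`, and `silence_class_edges`, the random weights, and the baseline-occlusion approximation of the decomposition step are all assumptions made for illustration.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_predict(A, X, W1, W2):
    """Two-layer GCN forward pass: softmax(A_norm @ ReLU(A_norm @ X @ W1) @ W2)."""
    A_norm = normalize_adjacency(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)
    return softmax(A_norm @ H @ W2)

def feature_impact(A, X, W1, W2, node, baseline=0.0):
    """Score each of `node`'s input features by the drop in the predicted
    class probability when that feature is replaced by a baseline value
    (an occlusion-style stand-in for the paper's decomposition)."""
    probs = gcn_predict(A, X, W1, W2)[node]
    pred_class = probs.argmax()
    impacts = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_mod = X.copy()
        X_mod[node, j] = baseline
        impacts[j] = probs[pred_class] - gcn_predict(A, X_mod, W1, W2)[node, pred_class]
    return pred_class, impacts

def silence_class_edges(A, labels, node, target_class):
    """Zero out every edge between `node` and neighbors labeled `target_class`,
    yielding a counterfactual graph for neighborhood-level explanations."""
    A_mod = A.copy()
    for nbr in np.flatnonzero(A[node]):
        if labels[nbr] == target_class:
            A_mod[node, nbr] = A_mod[nbr, node] = 0.0
    return A_mod

# Toy demo with synthetic data and untrained (random) weights.
rng = np.random.default_rng(0)
n_nodes, n_feats, n_hidden, n_classes = 8, 5, 4, 3       # 3 classes: NC, MCI, AD
A = np.triu((rng.random((n_nodes, n_nodes)) > 0.6).astype(float), 1)
A = A + A.T                                              # symmetric patient-similarity graph
X = rng.random((n_nodes, n_feats))                       # multimodal feature vectors
W1 = rng.normal(size=(n_feats, n_hidden))
W2 = rng.normal(size=(n_hidden, n_classes))
labels = rng.integers(0, n_classes, n_nodes)

pred, impacts = feature_impact(A, X, W1, W2, node=0)
A_cf = silence_class_edges(A, labels, node=0, target_class=2)
shift = gcn_predict(A, X, W1, W2)[0] - gcn_predict(A_cf, X, W1, W2)[0]
print(pred, impacts.round(3), shift.round(3))
```

In practice the weights `W1` and `W2` would come from a trained model, and the edges would carry the continuous similarity weights the paper's stability result refers to (explanations stable for edge weights above 0.80); here edges are binary for brevity, but the mechanics of both probes are unchanged.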
format | Article |
id | doaj.art-54850e908f314570a690b9f85e981005 |
institution | Directory Open Access Journal |
issn | 2624-8212 |
language | English |
publishDate | 2024-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Artificial Intelligence |
doi | 10.3389/frai.2023.1334613
volume | 6
affiliations | Sule Tekkesinoglu: Department of Computing Science, Umeå University, Umeå, Sweden. Sara Pudas: Department of Integrative Medical Biology (IMB), Umeå University, Umeå, Sweden; Umeå Center for Functional Brain Imaging, Umeå University, Umeå, Sweden.
title | Explaining graph convolutional network predictions for clinicians—An explainable AI approach to Alzheimer's disease classification |
topic | explainable AI; multimodal data; graph convolutional networks; Alzheimer's disease; node classification
url | https://www.frontiersin.org/articles/10.3389/frai.2023.1334613/full |