Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification
Deep neural networks (DNNs) are increasingly used in neuroimaging research for the diagnosis of brain disorders and the understanding of the human brain. Despite their impressive performance, their adoption in medical applications will be limited unless there is more transparency about how these algorithms arrive at their decisions. We address this issue in the current report. A DNN classifier was trained to discriminate between healthy subjects and those with posttraumatic stress disorder (PTSD) using brain connectivity obtained from functional magnetic resonance imaging data. The classifier achieved 90% accuracy. Brain connectivity features important for classification were generated for a pool of test subjects, and permutation testing was used to identify significantly discriminative connections. Such heatmaps of significant paths were generated by 10 different interpretability algorithms based on variants of layer-wise relevance propagation and gradient attribution methods. Because different interpretability algorithms make different assumptions about the data and the model, their explanations had both commonalities and differences. We therefore developed a consensus across interpretability methods, which aligned well with existing knowledge about the brain alterations underlying PTSD. More than 20 regions previously implicated in PTSD were confidently identified, with a voting score exceeding 8 and a family-wise error correction threshold below 0.05. Our work illustrates how robust, physiologically plausible explanations of DNN classifications can be obtained in diagnostic neuroimaging applications by evaluating convergence across interpretability methods. This will be crucial for trust in AI-based medical diagnostics in the future.
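The abstract describes a concrete pipeline: per-subject attribution maps from several interpretability algorithms, permutation testing for significance, and a cross-method voting consensus. The article itself ships no code, so the following is a minimal Python sketch of that workflow under stated assumptions: a toy `ConnectivityMLP` stands in for the trained classifier, five gradient-based attribution methods from the Captum library stand in for the paper's ten (which also include layer-wise relevance propagation variants), and a max-statistic sign-flip permutation test is one plausible choice for the family-wise error control the abstract mentions. All sizes, names, and thresholds here are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the consensus-attribution workflow described in the
# abstract. The model, data, attribution methods, and statistics are stand-ins;
# the paper's actual implementation may differ.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import (Saliency, InputXGradient, IntegratedGradients,
                         DeepLift, GuidedBackprop)

N_CONN = 190  # e.g., upper triangle of a 20-region connectivity matrix


class ConnectivityMLP(nn.Module):
    """Toy stand-in for the trained PTSD-vs-control classifier."""
    def __init__(self, n_features: int = N_CONN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.net(x)


def sign_flip_permutation_test(attr, n_perm=5000, alpha=0.05, seed=0):
    """Max-statistic sign-flip test: flag features whose attributions are
    consistently nonzero across test subjects, with family-wise error control
    via the null distribution of the maximum statistic."""
    rng = np.random.default_rng(seed)
    observed = np.abs(attr.mean(axis=0))            # (n_features,)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(attr.shape[0], 1))
        max_null[i] = np.abs((attr * flips).mean(axis=0)).max()
    # Per-feature p-value against the null distribution of the maximum
    pvals = (max_null[None, :] >= observed[:, None]).mean(axis=1)
    return pvals < alpha                             # significance mask


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ConnectivityMLP().eval()                 # pretend this is trained
    x_test = torch.randn(40, N_CONN)                 # 40 test subjects

    methods = {
        "saliency": Saliency(model),
        "input_x_gradient": InputXGradient(model),
        "integrated_gradients": IntegratedGradients(model),
        "deeplift": DeepLift(model),
        "guided_backprop": GuidedBackprop(model),
    }
    votes = np.zeros(N_CONN, dtype=int)
    for name, method in methods.items():
        # Attribution of each connection to the "PTSD" output (class 1)
        attr = method.attribute(x_test, target=1).detach().numpy()
        sig = sign_flip_permutation_test(attr)
        votes += sig.astype(int)
        print(f"{name}: {sig.sum()} significant connections")

    # Consensus: keep connections flagged by most methods.
    consensus = votes >= 3
    print("consensus connections:", np.flatnonzero(consensus))
```

With the paper's ten methods, the consensus line would become `votes >= 9` to mirror the reported voting score exceeding 8; the 3-of-5 majority used here is an arbitrary stand-in chosen only because this sketch runs five methods.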
Main Authors: | Gopikrishna Deshpande, Janzaib Masood, Nguyen Huynh, Thomas S. Denney, Michael N. Dretsch |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2024-01-01 |
Series: | IEEE Access |
Subjects: | Resting-state functional magnetic resonance; resting-state functional connectivity; interpretable deep learning |
Online Access: | https://ieeexplore.ieee.org/document/10499826/ |
Collection: | DOAJ (Directory of Open Access Journals) |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2024.3388911 |
Volume / Pages: | 12, 55474-55490 |
Author Affiliations: | Gopikrishna Deshpande (ORCID 0000-0001-7471-5357), Janzaib Masood (ORCID 0009-0003-7178-4391), Nguyen Huynh (ORCID 0000-0003-2337-4470), and Thomas S. Denney (ORCID 0000-0002-6695-4777): Auburn University Neuroimaging Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA. Michael N. Dretsch: Walter Reed Army Institute of Research-West, Joint Base Lewis-McChord, WA, USA |