An interpretable neural network for outcome prediction in traumatic brain injury
Abstract Background Traumatic Brain Injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have been used previously to help physicians predict long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are “black box”: it is difficult for humans to understand the decisions made by the model, with post-hoc explanations only identifying isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization. Methods In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months from hospitalization in TBI patients, based on information available at the time of admission. Results The proposed method is compared to established machine learning methods—XGBoost, Random Forest, and SVM—achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC)—0.799 for the proposed method vs. 0.810 for the best black box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model’s predictions and can be used as general guidelines by clinicians to inform treatment decisions. Conclusions The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes and can be used in situations when not all necessary factors are known to inform the full model’s decision.
Main Authors: | Cristian Minoccheri, Craig A. Williamson, Mark Hemmila, Kevin Ward, Erica B. Stein, Jonathan Gryak, Kayvan Najarian |
Format: | Article |
Language: | English |
Published: | BMC, 2022-08-01 |
Series: | BMC Medical Informatics and Decision Making |
Subjects: | Traumatic brain injury; Outcome prediction; Interpretable machine learning; Neural networks; Clinical decision support systems |
Online Access: | https://doi.org/10.1186/s12911-022-01953-z |
_version_ | 1811345247124324352 |
author | Cristian Minoccheri; Craig A. Williamson; Mark Hemmila; Kevin Ward; Erica B. Stein; Jonathan Gryak; Kayvan Najarian |
author_facet | Cristian Minoccheri; Craig A. Williamson; Mark Hemmila; Kevin Ward; Erica B. Stein; Jonathan Gryak; Kayvan Najarian |
author_sort | Cristian Minoccheri |
collection | DOAJ |
description | Abstract Background Traumatic Brain Injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have been used previously to help physicians predict long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are “black box”: it is difficult for humans to understand the decisions made by the model, with post-hoc explanations only identifying isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization. Methods In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months from hospitalization in TBI patients, based on information available at the time of admission. Results The proposed method is compared to established machine learning methods—XGBoost, Random Forest, and SVM—achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC)—0.799 for the proposed method vs. 0.810 for the best black box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model’s predictions and can be used as general guidelines by clinicians to inform treatment decisions. Conclusions The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes and can be used in situations when not all necessary factors are known to inform the full model’s decision. |
first_indexed | 2024-04-13T20:00:15Z |
format | Article |
id | doaj.art-462f9efaa45e4c2a9a8e04d53003c169 |
institution | Directory Open Access Journal |
issn | 1472-6947 |
language | English |
last_indexed | 2024-04-13T20:00:15Z |
publishDate | 2022-08-01 |
publisher | BMC |
record_format | Article |
series | BMC Medical Informatics and Decision Making |
spelling | doaj.art-462f9efaa45e4c2a9a8e04d53003c169 2022-12-22T02:32:14Z eng BMC; BMC Medical Informatics and Decision Making; 1472-6947; 2022-08-01; Vol. 22, Iss. 1, pp. 1–9; 10.1186/s12911-022-01953-z. An interpretable neural network for outcome prediction in traumatic brain injury. Cristian Minoccheri (Department of Computational Medicine and Bioinformatics, University of Michigan); Craig A. Williamson (Department of Neurosurgery, University of Michigan); Mark Hemmila (Max Harry Weil Institute for Critical Care Research and Innovation, University of Michigan); Kevin Ward (Max Harry Weil Institute for Critical Care Research and Innovation, University of Michigan); Erica B. Stein (Department of Radiology, University of Michigan); Jonathan Gryak (Department of Computational Medicine and Bioinformatics, University of Michigan); Kayvan Najarian (Department of Computational Medicine and Bioinformatics, University of Michigan). https://doi.org/10.1186/s12911-022-01953-z Traumatic brain injury; Outcome prediction; Interpretable machine learning; Neural networks; Clinical decision support systems |
spellingShingle | Cristian Minoccheri; Craig A. Williamson; Mark Hemmila; Kevin Ward; Erica B. Stein; Jonathan Gryak; Kayvan Najarian; An interpretable neural network for outcome prediction in traumatic brain injury; BMC Medical Informatics and Decision Making; Traumatic brain injury; Outcome prediction; Interpretable machine learning; Neural networks; Clinical decision support systems |
title | An interpretable neural network for outcome prediction in traumatic brain injury |
title_full | An interpretable neural network for outcome prediction in traumatic brain injury |
title_fullStr | An interpretable neural network for outcome prediction in traumatic brain injury |
title_full_unstemmed | An interpretable neural network for outcome prediction in traumatic brain injury |
title_short | An interpretable neural network for outcome prediction in traumatic brain injury |
title_sort | interpretable neural network for outcome prediction in traumatic brain injury |
topic | Traumatic brain injury; Outcome prediction; Interpretable machine learning; Neural networks; Clinical decision support systems |
url | https://doi.org/10.1186/s12911-022-01953-z |
work_keys_str_mv | AT cristianminoccheri aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT craigawilliamson aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT markhemmila aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT kevinward aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT ericabstein aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT jonathangryak aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT kayvannajarian aninterpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT cristianminoccheri interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT craigawilliamson interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT markhemmila interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT kevinward interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT ericabstein interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT jonathangryak interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury AT kayvannajarian interpretableneuralnetworkforoutcomepredictionintraumaticbraininjury |