Explaining Machine Learning Predictions of Decision Support Systems in Healthcare

Bibliographic Details
Main Authors: Polat Erdeniz Seda, Veeranki Sai, Schrempf Michael, Jauk Stefanie, Ngoc Trang Tran Thi, Felfernig Alexander, Kramer Diether, Leodolter Werner
Format: Article
Language: English
Published: De Gruyter, 2022-09-01
Series: Current Directions in Biomedical Engineering
ISSN: 2364-5504
Subjects: artificial intelligence, explainable AI, decision support systems, healthcare
Online Access: https://doi.org/10.1515/cdbme-2022-1031

Description: Artificial Intelligence (AI) methods, often based on Machine Learning (ML) algorithms, are increasingly applied in the healthcare domain to provide predictions to physicians and patients based on electronic health records (EHRs), such as histories of laboratory values, applied procedures, and diagnoses. The question “Why Should I Trust You?” encapsulates the problem with ML black boxes: the reasons behind an ML prediction must be explained to physicians and patients so that they can decide whether the prediction is applicable. In this paper, we explain and evaluate two prediction explanation methods for healthcare professionals (physicians and nurses). We compare two model-agnostic explanation methods, one based on global feature importance and one based on local feature importance, and we evaluate user trust and reliance (UTR) in the explanation results of each method in a user study based on real patients’ EHRs and the feedback of healthcare professionals. We observe that each method has strengths and weaknesses depending on the patient’s data, in particular on how much data is available for the patient: when the amount of data is small, global feature importance is sufficient, whereas for patients with large amounts of data a local feature importance method is preferable. As future work, we will develop a hybrid explanation method that automatically combines both approaches to achieve higher and more stable user trust and reliance.
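
The record carries no code, but the contrast between the two explanation styles named in the abstract can be illustrated with a short, self-contained Python sketch. This is a minimal illustration under assumptions: the random-forest model, the synthetic stand-in data, and the feature names are hypothetical, and the mean-substitution local attribution is only a crude proxy for LIME/SHAP-style local methods; nothing here reproduces the authors' actual pipeline.

    # Sketch contrasting global vs. local model-agnostic feature importance.
    # Model, data, and feature names are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Hypothetical tabular EHR-like data: each column stands in for a lab
    # value, a procedure count, or a diagnosis flag.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    feature_names = ["creatinine", "hemoglobin", "n_procedures",
                     "diabetes_dx", "age", "heart_rate"]  # assumed names

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Global feature importance: permutation importance averaged over the
    # whole test set, yielding ONE ranking for the model as a whole.
    global_imp = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, global_imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"global  {name:>13}: {score:+.3f}")

    # Local feature importance: a per-patient attribution. As a crude proxy
    # for LIME/SHAP-style methods, measure how the predicted risk of ONE
    # patient changes when each feature is replaced by the population mean.
    patient = X_test[0]
    base_risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
    for i, name in enumerate(feature_names):
        perturbed = patient.copy()
        perturbed[i] = X_train[:, i].mean()
        delta = base_risk - model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        print(f"local   {name:>13}: {delta:+.3f}")

For a patient with few recorded features, the single global ranking may be all a clinician needs; for data-rich patients, the per-patient deltas can diverge substantially from the global ranking, which mirrors the trade-off the abstract reports.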

Citation: Current Directions in Biomedical Engineering, vol. 8, no. 2, pp. 117-120, published 2022-09-01. DOI: 10.1515/cdbme-2022-1031

Author Affiliations:
Polat Erdeniz Seda, Veeranki Sai, Schrempf Michael, Jauk Stefanie, Kramer Diether, Leodolter Werner: Styrian Hospitals Limited Liability Company (Die Steiermärkische Krankenanstaltengesellschaft m.b.H. - KAGes), Billrothgasse 18A, Graz, Austria
Ngoc Trang Tran Thi, Felfernig Alexander: Graz University of Technology, Inffeldgasse 16B/2, Graz, Austria