Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals
Abstract: Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.
Main Authors: | Nadine Bienefeld, Jens Michael Boss, Rahel Lüthy, Dominique Brodbeck, Jan Azzati, Mirco Blaser, Jan Willms, Emanuela Keller |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2023-05-01 |
Series: | npj Digital Medicine |
Online Access: | https://doi.org/10.1038/s41746-023-00837-4 |
_version_ | 1797426956252217344 |
---|---|
author | Nadine Bienefeld; Jens Michael Boss; Rahel Lüthy; Dominique Brodbeck; Jan Azzati; Mirco Blaser; Jan Willms; Emanuela Keller |
collection | DOAJ |
description | Abstract Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare. |
first_indexed | 2024-03-09T08:37:21Z |
format | Article |
id | doaj.art-e2d257a22f134b72b3f5176687ed7ced |
institution | Directory Open Access Journal |
issn | 2398-6352 |
language | English |
last_indexed | 2024-03-09T08:37:21Z |
publishDate | 2023-05-01 |
publisher | Nature Portfolio |
record_format | Article |
series | npj Digital Medicine |
spelling | Nadine Bienefeld (Department of Management, Technology, and Economics, ETH Zurich); Jens Michael Boss (Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich); Rahel Lüthy (Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW); Dominique Brodbeck (Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW); Jan Azzati (Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW); Mirco Blaser (Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW); Jan Willms (Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich); Emanuela Keller (Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich) |
title | Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals |
url | https://doi.org/10.1038/s41746-023-00837-4 |