Trust perceptions towards XAI in healthcare

This study explores medical professionals' perceptions of artificial intelligence (AI) and the influence of Explainable AI (XAI) algorithms on their trust in AI. The research centres on one-on-one interviews with 12 medical students, divided into two groups of six, each group exposed to one of two XAI algorithms, SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations), explaining how a Random Forest classifier predicted lung cancer from a set of predictive attributes including demographic information, basic health metrics, lifestyle habits, and findings from genetic PCA (Principal Component Analysis). Additionally, online surveys on the same case were administered to 50 medical students. Participants for both the interviews and the surveys were selected to ensure equal representation from two medical schools in Singapore, with consideration given to gender and self-rated confidence in AI knowledge. Qualitative interview data were analysed using Reflexive Thematic Analysis, revealing themes related to trust in AI and perceptions of XAI algorithms. Quantitative survey data were analysed in Microsoft Excel to visualise trends and patterns in the responses. Four research questions were developed, and the findings suggest that the type of XAI algorithm used does not significantly affect medical professionals' trust in AI, and that XAI's influence on this trust may not be direct; further research, and improvements to XAI outputs, are needed for XAI to achieve its intended purposes. This study contributes to the understanding of how XAI can enhance trust in AI among medical professionals, with implications for the design and implementation of AI systems in healthcare settings.
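For context on the two explanation methods named in the abstract, the sketch below shows how SHAP and LIME are commonly applied to a scikit-learn Random Forest classifier. It is a minimal illustration under stated assumptions: the data are synthetic and the feature names (age, smoking_years, genetic PCA components, and so on) are hypothetical stand-ins, not the study's actual dataset or pipeline.

    # Minimal sketch (illustrative only): a Random Forest lung-cancer
    # prediction explained with SHAP and LIME. The synthetic data and
    # feature names below are assumptions, not the study's dataset.
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    features = ["age", "bmi", "smoking_years", "alcohol_use",
                "genetic_pc1", "genetic_pc2"]  # hypothetical attributes
    X = rng.normal(size=(200, len(features)))
    y = (X[:, 2] + 0.5 * X[:, 4] + rng.normal(size=200) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # SHAP: additive Shapley-value attributions, exact for tree ensembles.
    shap_values = shap.TreeExplainer(model).shap_values(X[:1])

    # LIME: fit a local interpretable surrogate around one prediction.
    explainer = LimeTabularExplainer(X, feature_names=features,
                                     class_names=["benign", "cancer"],
                                     mode="classification")
    explanation = explainer.explain_instance(X[0], model.predict_proba,
                                             num_features=len(features))
    print(explanation.as_list())  # (feature condition, weight) pairs

Both methods attribute a single prediction to individual input features, but differently: SHAP computes additive Shapley-value contributions over the tree ensemble, while LIME fits a simple surrogate model in the neighbourhood of one instance, which is why the study could present the same lung-cancer case to participants through either explanation.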


Bibliographic Details
Main Author: Cai, Xinrui
Other Authors: Fan Xiuyi
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science
School: School of Computer Science and Engineering
Degree: Bachelor's degree
Project Code: SCSE23-0700
Citation: Cai, X. (2024). Trust perceptions towards XAI in healthcare. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175388
Online Access: https://hdl.handle.net/10356/175388