Exploring the effects of human-centered AI explanations on trust and reliance

Transparency is widely regarded as crucial for the responsible real-world deployment of artificial intelligence (AI) and is considered an essential prerequisite to establishing trust in AI. There are several approaches to enabling transparency, with one promising attempt being human-centered explanations. However, there is little research into the effectiveness of human-centered explanations on end-users' trust. What complicates the comparison of existing empirical work is that trust is measured in different ways. Some researchers measure subjective trust using questionnaires, while others measure objective trust-related behavior such as reliance. To bridge these gaps, we investigated the effects of two promising human-centered post-hoc explanations, feature importance and counterfactuals, on trust and reliance. We compared these two explanations with a control condition in a decision-making experiment (N = 380). Results showed that human-centered explanations can significantly increase reliance, but the type of decision-making (increasing a price vs. decreasing a price) had an even greater influence. This challenges the presumed importance of transparency over other factors in human decision-making involving AI, such as potential heuristics and biases. We conclude that trust does not necessarily equate to reliance and emphasize the importance of appropriate, validated, and agreed-upon metrics to design and evaluate human-centered AI.

Bibliographic Details
Main Authors: Nicolas Scharowski, Sebastian A. C. Perrig, Melanie Svab, Klaus Opwis, Florian Brühlmann
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-07-01
Series: Frontiers in Computer Science
ISSN: 2624-9898
Subjects: AI; XAI; HCXAI; trust; reliance; transparency
Online Access: https://www.frontiersin.org/articles/10.3389/fcomp.2023.1151150/full
Collection: DOAJ (Directory of Open Access Journals)
Record ID: doaj.art-80f8202d94494633afb0fb84b85f8d58