Exploring the effects of human-centered AI explanations on trust and reliance
Transparency is widely regarded as crucial for the responsible real-world deployment of artificial intelligence (AI) and is considered an essential prerequisite to establishing trust in AI. There are several approaches to enabling transparency, one promising approach being human-centered explanations...
Main Authors: Nicolas Scharowski, Sebastian A. C. Perrig, Melanie Svab, Klaus Opwis, Florian Brühlmann
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-07-01
Series: Frontiers in Computer Science
Online Access: https://www.frontiersin.org/articles/10.3389/fcomp.2023.1151150/full
Similar Items
- Dynamic Selection of Reliance Calibration Cues With AI Reliance Model
  by: Yosuke Fukuchi, et al.
  Published: (2023-01-01)
- Humans in XAI: increased reliance in decision-making under uncertainty by using explanation strategies
  by: Olesja Lammert, et al.
  Published: (2024-03-01)
- Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics
  by: Saša Brdnik, et al.
  Published: (2023-06-01)
- 3Es for AI: Economics, Explanation, Epistemology
  by: Nitasha Kaul
  Published: (2022-03-01)
- Fairness and Explanation in AI-Informed Decision Making
  by: Alessa Angerschmid, et al.
  Published: (2022-06-01)