AI Trust: Can Explainable AI Enhance Warranted Trust?
Explainable artificial intelligence (XAI), which produces explanations so that predictions from AI models can be understood, is commonly used to mitigate possible AI mistrust. The underlying premise is that the explanations of XAI models enhance AI trust. However, such an increase may depend o...
Main Authors: Regina de Brito Duarte, Filipa Correia, Patrícia Arriaga, Ana Paiva
Format: Article
Language: English
Published: Hindawi-Wiley, 2023-01-01
Series: Human Behavior and Emerging Technologies
Online Access: http://dx.doi.org/10.1155/2023/4637678
Similar Items
- Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
  by: Tirtha Chanda, et al.
  Published: (2024-01-01)
- The PRC considers military AI ethics: Can autonomy be trusted?
  by: Mark Metcalf
  Published: (2022-10-01)
- Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance
  by: Robert R. Hoffman, et al.
  Published: (2023-02-01)
- AI-Enabled Trust in Distributed Networks
  by: Zhiqi Li, et al.
  Published: (2023-01-01)
- Trusted journalism in the age of generative AI
  by: Borchardt, A., et al.
  Published: (2024)