Inhibitors and Enablers to Explainable AI Success: A Systematic Examination of Explanation Complexity and Individual Characteristics

With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These contradictions could be due to users' individual differences, which have rarely been studied systematically with regard to their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent that advised them on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users' individual characteristics in particular reinforced the fulfillment of those objectives. The strongest predictor of objective fulfillment was the degree to which human characteristics were attributed to the agent: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the better important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of how to design explanations for an AI-based agent system that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.

Bibliographic Details
Main Authors: Carolin Wienrich, Astrid Carolus, David Roth-Isigkeit, Andreas Hotho
Format: Article
Language: English
Published: MDPI AG, 2022-11-01
Series: Multimodal Technologies and Interaction, Vol. 6, Issue 12, Article 106
ISSN: 2414-4088
DOI: 10.3390/mti6120106
Subjects: explainable AI; human-centered AI; recommender agent; explanation complexity; individual differences
Online Access: https://www.mdpi.com/2414-4088/6/12/106
Author Affiliations:
Carolin Wienrich: Psychology of Intelligent Interactive Systems, University of Würzburg, 97070 Würzburg, Germany
Astrid Carolus: Media Psychology, University of Würzburg, 97070 Würzburg, Germany
David Roth-Isigkeit: Center for Social Implications of Artificial Intelligence, University of Würzburg, 97070 Würzburg, Germany
Andreas Hotho: Data Science, University of Würzburg, 97070 Würzburg, Germany