Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective
Main Authors: | Ezekiel Bernardo, Rosemary Seva |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-03-01 |
Series: | Informatics |
Subjects: | explainable AI; XAI; artificial intelligence; AI; interpretable deep learning; machine learning |
Online Access: | https://www.mdpi.com/2227-9709/10/1/32 |
_version_ | 1797611084073402368 |
---|---|
author | Ezekiel Bernardo, Rosemary Seva |
author_facet | Ezekiel Bernardo, Rosemary Seva |
author_sort | Ezekiel Bernardo |
collection | DOAJ |
description | Explainable Artificial Intelligence (XAI) has successfully solved the black box paradox of Artificial Intelligence (AI). By providing human-level insights into AI, it allows users to understand its inner workings even with limited knowledge of the machine learning algorithms it uses. As a result, the field has grown and development has flourished. However, concerns have been raised that the techniques are limited in terms of whom they apply to and how their effects can be leveraged. Currently, most XAI techniques have been designed by developers. Though needed and valuable, XAI is more critical for end-users, considering that transparency hinges on trust and adoption. This study aims to understand and conceptualize an end-user-centric XAI to address this gap in end-user understanding. Building on recent findings of related studies, it focuses on design conceptualization and affective analysis. Data from 202 participants were collected through an online survey to identify the vital XAI design components, and through testbed experimentation to explore changes in affect and trust per design configuration. The results show that affect is a viable trust calibration route for XAI. In terms of design, explanation form, communication style, and the presence of supplementary information are the components users look for in an effective XAI. Lastly, anxiety about AI, incidental emotion, perceived AI reliability, and experience using the system are significant moderators of the trust calibration process for an end-user. |
first_indexed | 2024-03-11T06:23:52Z |
format | Article |
id | doaj.art-3910b4b6d44d474193a03935ec4f6f73 |
institution | Directory Open Access Journal |
issn | 2227-9709 |
language | English |
last_indexed | 2024-03-11T06:23:52Z |
publishDate | 2023-03-01 |
publisher | MDPI AG |
record_format | Article |
series | Informatics |
spelling | doaj.art-3910b4b6d44d474193a03935ec4f6f732023-11-17T11:43:38ZengMDPI AGInformatics2227-97092023-03-011013210.3390/informatics10010032Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric PerspectiveEzekiel Bernardo0Rosemary Seva1Industrial and Systems Engineering Department, De La Salle University—Manila, 2401 Taft Ave, Malate, Manila 1004, PhilippinesIndustrial and Systems Engineering Department, De La Salle University—Manila, 2401 Taft Ave, Malate, Manila 1004, PhilippinesExplainable Artificial Intelligence (XAI) has successfully solved the black box paradox of Artificial Intelligence (AI). By providing human-level insights on AI, it allowed users to understand its inner workings even with limited knowledge of the machine learning algorithms it uses. As a result, the field grew, and development flourished. However, concerns have been expressed that the techniques are limited in terms of to whom they are applicable and how their effect can be leveraged. Currently, most XAI techniques have been designed by developers. Though needed and valuable, XAI is more critical for an end-user, considering transparency cleaves on trust and adoption. This study aims to understand and conceptualize an end-user-centric XAI to fill in the lack of end-user understanding. Considering recent findings of related studies, this study focuses on design conceptualization and affective analysis. Data from 202 participants were collected from an online survey to identify the vital XAI design components and testbed experimentation to explore the affective and trust change per design configuration. The results show that affective is a viable trust calibration route for XAI. In terms of design, explanation form, communication style, and presence of supplementary information are the components users look for in an effective XAI. Lastly, anxiety about AI, incidental emotion, perceived AI reliability, and experience using the system are significant moderators of the trust calibration process for an end-user.https://www.mdpi.com/2227-9709/10/1/32explainable AIXAIartificial intelligenceAIinterpretable deep learningmachine learning |
spellingShingle | Ezekiel Bernardo Rosemary Seva Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective Informatics explainable AI XAI artificial intelligence AI interpretable deep learning machine learning |
title | Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective |
title_full | Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective |
title_fullStr | Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective |
title_full_unstemmed | Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective |
title_short | Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective |
title_sort | affective design analysis of explainable artificial intelligence xai a user centric perspective |
topic | explainable AI XAI artificial intelligence AI interpretable deep learning machine learning |
url | https://www.mdpi.com/2227-9709/10/1/32 |
work_keys_str_mv | AT ezekielbernardo affectivedesignanalysisofexplainableartificialintelligencexaiausercentricperspective AT rosemaryseva affectivedesignanalysisofexplainableartificialintelligencexaiausercentricperspective |