Building more explainable artificial intelligence with argumentation
Much of machine learning is currently opaque, functioning like a “black box”. However, for humans to understand, trust, and effectively manage emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to expla...
Main Authors: Zeng, Zhiwei; Miao, Chunyan; Leung, Cyril; Chin, Jing Jih
Other Authors: School of Computer Science and Engineering
Format: Conference Paper
Language: English
Published: 2020
Online Access: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16762
https://hdl.handle.net/10356/139223
Similar Items
- Computing argumentative explanations in bipolar argumentation frameworks
  by: Miao, Chunyan, et al.
  Published: (2019)
- Explainable Artificial Intelligence in education
  by: Hassan Khosravi, et al.
  Published: (2022-01-01)
- Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare
  by: Tim Hulsen
  Published: (2023-08-01)
- Achieving descriptive accuracy in explanations via argumentation: The case of probabilistic classifiers
  by: Emanuele Albini, et al.
  Published: (2023-04-01)
- AI empowered context-aware smart system for medication adherence
  by: Qiong Wu, et al.
  Published: (2017-06-01)