Provenance documentation to enable explainable and trustworthy AI: A literature review


Bibliographic Details
Main Authors: Amruta Kale, Tin Nguyen, Frederick C. Harris, Chenhao Li, Jiyin Zhang, Xiaogang Ma
Format: Article
Language: English
Published: The MIT Press, 2023-01-01
Series: Data Intelligence
Online Access: https://direct.mit.edu/dint/article/5/1/139/109494/Provenance-documentation-to-enable-explainable-and
Description
Summary: Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly discussed that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. We also discuss patterns in recent developments in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
ISSN: 2641-435X