Evaluating perceptual and semantic interpretability of saliency methods: A case study of melanoma

Abstract: In order to be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people who engage with them. There exist multiple algorithmic methods for assessing faithfulness, but this is not so for interpretability, which is typically on...

Bibliographic Details
Main Authors: Harshit Bokadia, Scott Cheng‐Hsin Yang, Zhaobin Li, Tomas Folke, Patrick Shafto
Format: Article
Language: English
Published: Wiley 2022-09-01
Series: Applied AI Letters
Subjects:
Online Access: https://doi.org/10.1002/ail2.77