As if sand were stone. New concepts and metrics to probe the ground on which to build trustable AI
Abstract

Background: We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine.

Methods: Accordingly, we propose a fra...
| Main Authors: | Federico Cabitza, Andrea Campagner, Luca Maria Sconfienza |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2020-09-01 |
| Series: | BMC Medical Informatics and Decision Making |
| Online Access: | http://link.springer.com/article/10.1186/s12911-020-01224-9 |
Similar Items

- Need for UAI–Anatomy of the Paradigm of Usable Artificial Intelligence for Domain-Specific AI Applicability
  by: Hajo Wiemer, et al.
  Published: (2023-02-01)
- Editorial: Explainable, Trustworthy, and Responsible AI for the Financial Service Industry
  by: Branka Hadji Misheva, et al.
  Published: (2022-05-01)
- Painting the Black Box White: Experimental Findings from Applying XAI to an ECG Reading Setting
  by: Federico Cabitza, et al.
  Published: (2023-03-01)
- Understanding the Behavior of Gas Sensors Using Explainable AI
  by: Sanghamitra Chakraborty, et al.
  Published: (2022-11-01)
- AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors
  by: Hyun Yang, et al.
  Published: (2025-03-01)