As if sand were stone. New concepts and metrics to probe the ground on which to build trustable AI

Abstract

Background: We focus on the importance of interpreting the quality of the labeling used as the input of predictive models, in order to understand the reliability of their output in support of human decision-making, especially in critical domains such as medicine.

Methods: Accordingly, we propose a framework distinguishing the reference labeling (or Gold Standard) from the set of annotations from which it is usually derived (the Diamond Standard). We define a set of quality dimensions and related metrics: representativeness (are the available data representative of their reference population?); reliability (do the raters agree with each other in their ratings?); and accuracy (are the raters' annotations a true representation?). The metrics for these dimensions are, respectively, the degree of correspondence, Ψ; the degree of weighted concordance, ϱ; and the degree of fineness, Φ. We apply and evaluate these metrics in a diagnostic user study involving 13 radiologists.

Results: We evaluate Ψ against hypothesis-testing techniques, highlighting that our metric can better evaluate distribution similarity in high-dimensional spaces. We discuss how Ψ could be used to assess the reliability of new predictions or for train-test selection. We report the value of ϱ for our case study and compare it with traditional reliability metrics, highlighting both their theoretical properties and the reasons they differ. We then report the degree of fineness as an estimate of the accuracy of the collected annotations, and discuss its relationship with the degree of weighted concordance, which we find to be moderately but significantly correlated. Finally, we discuss the implications of the proposed dimensions and metrics in the context of Explainable Artificial Intelligence (XAI).

Conclusion: We propose several dimensions and related metrics to assess the quality of the datasets used to build predictive models and Medical Artificial Intelligence (MAI). We argue that the proposed metrics are feasible to apply in real-world settings for the continuous development of trustable and interpretable MAI systems.
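The framework's core distinction, a Gold Standard reference label derived from a Diamond Standard (the set of per-case annotations collected from several raters), can be illustrated with a minimal Python sketch. The sketch assumes majority voting as the aggregation rule and raw percent agreement as a crude reliability proxy; the names `diamond`, `gold_label`, and `percent_agreement` are hypothetical, and neither function implements the paper's actual metrics Ψ, ϱ, or Φ, which are defined only in the full text.

```python
# Minimal sketch: deriving a Gold Standard from a Diamond Standard.
# Majority voting and percent agreement are illustrative stand-ins,
# not the paper's metrics (Ψ, ϱ, Φ).
from collections import Counter

# Hypothetical Diamond Standard: one row per case, one column per rater.
diamond = [
    ["positive", "positive", "negative"],
    ["negative", "negative", "negative"],
    ["positive", "negative", "positive"],
]

def gold_label(annotations):
    """Majority vote over one case's annotations (ties resolved arbitrarily)."""
    return Counter(annotations).most_common(1)[0][0]

def percent_agreement(annotations):
    """Fraction of raters matching the modal label for one case."""
    counts = Counter(annotations)
    return counts.most_common(1)[0][1] / len(annotations)

gold = [gold_label(case) for case in diamond]
mean_agreement = sum(percent_agreement(case) for case in diamond) / len(diamond)
print(gold)                     # ['positive', 'negative', 'positive']
print(round(mean_agreement, 2)) # 0.78
```

A simple vote like this discards how contested each label was; the paper's point is precisely that such information about the Diamond Standard should be measured and reported rather than lost.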

Bibliographic Details

Main Authors: Federico Cabitza (Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca); Andrea Campagner (IRCCS Istituto Ortopedico Galeazzi); Luca Maria Sconfienza (IRCCS Istituto Ortopedico Galeazzi)
Format: Article
Language: English
Published: BMC, 2020-09-01
Series: BMC Medical Informatics and Decision Making, Vol. 20, Iss. 1, pp. 1-21
ISSN: 1472-6947
DOI: 10.1186/s12911-020-01224-9
Subjects: Gold standard; Explainable AI; Machine learning; Reliability; Usable AI
Online Access: http://link.springer.com/article/10.1186/s12911-020-01224-9