A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare
Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks.
Main Authors: | Jana Fehr, Brian Citro, Rohit Malpani, Christoph Lippert, Vince I. Madai |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2024-02-01 |
Series: | Frontiers in Digital Health |
Subjects: | medical AI; AI ethics; transparency; medical device regulation; trustworthy AI |
Online Access: | https://www.frontiersin.org/articles/10.3389/fdgth.2024.1267290/full |
author | Jana Fehr, Brian Citro, Rohit Malpani, Christoph Lippert, Vince I. Madai |
author_sort | Jana Fehr |
collection | DOAJ |
description | Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products of the IIb risk category in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question as 0, 0.5, or 1 to rate whether the required information was “unavailable,” “partially available,” or “fully available.” The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects such as consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, public documentation of authorized medical AI products in Europe lacks sufficient transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health. |
format | Article |
id | doaj.art-96f681f9ef314e57b6b1661a46955ad5 |
institution | Directory Open Access Journal |
issn | 2673-253X |
language | English |
publishDate | 2024-02-01 |
publisher | Frontiers Media S.A. |
series | Frontiers in Digital Health |
doi | 10.3389/fdgth.2024.1267290 |
volume | 6 |
author_affiliations | Jana Fehr: Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany; Digital Engineering Faculty, University of Potsdam, Potsdam, Germany; QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany. Brian Citro: Independent Researcher, Chicago, IL, United States. Rohit Malpani: Consultant, Paris, France. Christoph Lippert: Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany; Digital Engineering Faculty, University of Potsdam, Potsdam, Germany; Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States. Vince I. Madai: QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany; Faculty of Computing, Engineering and the Built Environment, School of Computing and Digital Technology, Birmingham City University, Birmingham, United Kingdom. |
title | A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare |
topic | medical AI; AI ethics; transparency; medical device regulation; trustworthy AI |
url | https://www.frontiersin.org/articles/10.3389/fdgth.2024.1267290/full |
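The description above fully specifies the scoring arithmetic: each of the 55 survey questions is rated 0 (“unavailable”), 0.5 (“partially available”), or 1 (“fully available”), and a product's transparency score is the sum of its ratings relative to all 55 questions. Below is a minimal Python sketch of that calculation, not the authors' code; the example ratings are hypothetical, chosen only so that the result reproduces the reported median of 29.1%.

```python
# Sketch of the transparency scoring scheme described in the abstract.
# Each of 55 survey questions is rated 0 ("unavailable"), 0.5 ("partially
# available"), or 1 ("fully available"); a product's transparency score is
# the sum of its ratings as a percentage of all 55 questions.

ALLOWED_RATINGS = {0, 0.5, 1}
N_QUESTIONS = 55

def transparency_score(ratings: list[float]) -> float:
    """Return the transparency score as a percentage of all 55 questions."""
    if len(ratings) != N_QUESTIONS:
        raise ValueError(f"expected {N_QUESTIONS} ratings, got {len(ratings)}")
    if any(r not in ALLOWED_RATINGS for r in ratings):
        raise ValueError("each rating must be 0, 0.5, or 1")
    return 100 * sum(ratings) / N_QUESTIONS

# Hypothetical ratings: 10 questions fully documented, 12 partially
# documented, 33 undocumented, i.e. 10 + 6 = 16 points out of 55.
example = [1] * 10 + [0.5] * 12 + [0] * 33
print(f"{transparency_score(example):.1f}%")  # -> 29.1%, the reported median
```

Under this scheme the reported extremes are consistent as well: ratings summing to 3.5 of 55 points give 6.4%, and 33.5 of 55 give 60.9%.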