Artificial intelligence in medicine and the disclosure of risks


Bibliographic Details
Main Author: Kiener, MAH
Format: Journal article
Language: English
Published: Springer 2020
Description

This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the already widely accepted conditions for the evaluation of risks, i.e. the ‘nature’ and ‘likelihood’ of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I explain that these risks are exacerbated by pandemics such as the COVID-19 crisis, which further emphasises their significance.

Institution: University of Oxford
ID: oxford-uuid:81365bda-38a9-4d3a-b5e0-7b587475bed1