Do as AI say: susceptibility in deployment of clinical decision-aids
Artificial intelligence (AI) models for decision support have been developed for clinical settings such as radiology, but little work evaluates the potential impact of such systems. In this study, physicians received chest X-rays and diagnostic advice, some of which was inaccurate, and were asked to evaluate advice quality and make diagnoses.
Main Authors: | Gaube, Susanne; Suresh, Harini; Raue, Martina Julia; Merritt, Alexander; Berkowitz, Seth J.; Lermer, Eva; Coughlin, Joseph F.; Guttag, John V.; Colak, Errol; Ghassemi, Marzyeh |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Springer Science and Business Media LLC, 2021 |
Online Access: | https://hdl.handle.net/1721.1/130457 |
author | Gaube, Susanne; Suresh, Harini; Raue, Martina Julia; Merritt, Alexander; Berkowitz, Seth J.; Lermer, Eva; Coughlin, Joseph F.; Guttag, John V.; Colak, Errol; Ghassemi, Marzyeh |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
author_sort | Gaube, Susanne |
collection | MIT |
description | Artificial intelligence (AI) models for decision support have been developed for clinical settings such as radiology, but little work evaluates the potential impact of such systems. In this study, physicians received chest X-rays and diagnostic advice, some of which was inaccurate, and were asked to evaluate advice quality and make diagnoses. All advice was generated by human experts, but some was labeled as coming from an AI system. As a group, radiologists rated advice as lower quality when it appeared to come from an AI system; physicians with less task-expertise did not. Diagnostic accuracy was significantly worse when participants received inaccurate advice, regardless of the purported source. This work raises important considerations for how advice, AI and non-AI, should be deployed in clinical environments. |
first_indexed | 2024-09-23T09:35:04Z |
format | Article |
id | mit-1721.1/130457 |
institution | Massachusetts Institute of Technology |
language | English |
last_indexed | 2024-09-23T09:35:04Z |
publishDate | 2021 |
publisher | Springer Science and Business Media LLC |
record_format | dspace |
spelling | mit-1721.1/130457. Record dated 2022-09-30T15:27:38Z; accessioned 2021-04-12T19:22:25Z; published 2021-02; 2020-07; 2021-04-06T15:51:17Z. Affiliations: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Center for Transportation & Logistics; AgeLab (Massachusetts Institute of Technology). Type: Article (http://purl.org/eprint/type/JournalArticle). ISSN 2398-6352. Citation: Gaube, Susanne et al. "Do as AI say: susceptibility in deployment of clinical decision-aids." npj Digital Medicine 4, 1 (February 2021). doi: http://dx.doi.org/10.1038/s41746-021-00385-9. https://hdl.handle.net/1721.1/130457. © 2021 The Author(s). Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/). application/pdf. Springer Science and Business Media LLC; Nature. |
title | Do as AI say: susceptibility in deployment of clinical decision-aids |
url | https://hdl.handle.net/1721.1/130457 |