Artificial Intelligence Bias in Health Care: Web-Based Survey


Bibliographic Details
Main Authors: Carina Nina Vorisek, Caroline Stellmach, Paula Josephine Mayer, Sophie Anne Ines Klopfenstein, Dominik Martin Bures, Anke Diehl, Maike Henningsen, Kerstin Ritter, Sylvia Thun
Format: Article
Language: English
Published: JMIR Publications, 2023-06-01
Series: Journal of Medical Internet Research
Online Access: https://www.jmir.org/2023/1/e41089
Description
Background: Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and the reduction of bias in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers.

Objective: This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventive measures.

Methods: A web-based survey comprising a maximum of 41 questions with branching logic was provided in both German and English within the REDCap web application. Only the responses of participants with experience in medical AI applications who completed the questionnaire were included in the analysis. Demographic data, technical expertise, perceptions of fairness, and knowledge of biases in AI were analyzed, and variation by gender, age, and work environment was assessed.

Results: A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. Roughly one-third each rated their AI development projects as fair (47/151, 31%) or moderately fair (51/151, 34%); 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. The one participant identifying as diverse rated AI developments as barely fair, and the 2 participants of undefined gender rated them as barely fair and moderately fair, respectively. The reasons for bias selected by respondents were a lack of fair data (90/132, 68%), of guidelines or recommendations (65/132, 49%), and of knowledge (60/132, 45%). About half of the respondents worked with image data (83/151, 55%) from 1 center only (76/151, 50%), and 35% (53/151) worked exclusively with national data.

Conclusions: This study shows that AI developments were, overall, perceived as moderately fair. Gender minorities did not once rate their AI development as fair or very fair. Further studies should therefore focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and to provide guidelines on preventing biases in AI health care applications.
ISSN: 1438-8871
DOI: 10.2196/41089