Addressing fairness in artificial intelligence for medical imaging
A plethora of work has shown that AI systems can systematically and unfairly be biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are beginning to be increasingly adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of biases, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting areas of vacancy, challenges and opportunities that lie ahead.
Main Authors: María Agustina Ricci Lara, Rodrigo Echeveste, Enzo Ferrante
Format: Article
Language: English
Published: Nature Portfolio, 2022-08-01
Series: Nature Communications
Online Access: https://doi.org/10.1038/s41467-022-32186-3
author | María Agustina Ricci Lara; Rodrigo Echeveste; Enzo Ferrante
collection | DOAJ |
description | A plethora of work has shown that AI systems can systematically and unfairly be biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are beginning to be increasingly adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of biases, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting areas of vacancy, challenges and opportunities that lie ahead. |
format | Article |
id | doaj.art-20433615daa94b73b43e4d281d940201 |
institution | Directory Open Access Journal |
issn | 2041-1723 |
language | English |
publishDate | 2022-08-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
spelling | María Agustina Ricci Lara (Health Informatics Department, Hospital Italiano de Buenos Aires); Rodrigo Echeveste (Research Institute for Signals, Systems and Computational Intelligence sinc(i) (FICH-UNL/CONICET)); Enzo Ferrante (Research Institute for Signals, Systems and Computational Intelligence sinc(i) (FICH-UNL/CONICET)). Addressing fairness in artificial intelligence for medical imaging. Nature Communications, vol. 13, no. 1 (2022-08-01), Nature Portfolio, ISSN 2041-1723, doi:10.1038/s41467-022-32186-3. Record: doaj.art-20433615daa94b73b43e4d281d940201
title | Addressing fairness in artificial intelligence for medical imaging |
url | https://doi.org/10.1038/s41467-022-32186-3 |