Algorithmic encoding of protected characteristics in chest X-ray disease detection models

Summary:

Background: It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. An algorithm may encode protected characteristics and then use this information for making predictions due to undesirable correlations in the (historical) training data. It remains unclear how we can establish whether such information is actually used. Besides the scarcity of data from underserved populations, very little is known about how dataset biases manifest in predictive models and how this may result in disparate performance. This article aims to shed light on these issues by exploring methodology for subgroup analysis in image-based disease detection models.

Methods: We use two publicly available chest X-ray datasets, CheXpert and MIMIC-CXR, to study performance disparities across race and biological sex in deep learning models. We explore test-set resampling, transfer learning, multitask learning, and model inspection to assess the relationship between the encoding of protected characteristics and disease detection performance across subgroups.

Findings: We confirm subgroup disparities in terms of shifted true and false positive rates, which are partially removed after correcting for population and prevalence shifts in the test sets. We find that transfer learning alone is insufficient for establishing whether specific patient information is used for making predictions. The proposed combination of test-set resampling, multitask learning, and model inspection reveals valuable insights about the way protected characteristics are encoded in the feature representations of deep neural networks.

Interpretation: Subgroup analysis is key for identifying performance disparities of AI models, but statistical differences across subgroups need to be taken into account when analyzing potential biases in disease detection. The proposed methodology provides a comprehensive framework for subgroup analysis, enabling further research into the underlying causes of disparities.

Funding: European Research Council (Horizon 2020), UK Research and Innovation.
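The abstract describes the evaluation methodology only at a high level. To make two of the named ingredients concrete, the sketches below illustrate (i) resampling a test set so that subgroups are compared at equal size and matched disease prevalence before true/false positive rates are computed, and (ii) a multitask set-up in which a shared image backbone feeds both a disease head and a protected-characteristic head. Both are minimal Python sketches under assumed column names, layer sizes, thresholds, and loss weights; they are not the authors' code and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): resample a test set so that two
# subgroups have equal size and matched disease prevalence before comparing
# true/false positive rates. Column names ('sex', 'label', 'score') and the
# decision threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def resample_matched(df, group_col, label_col, n_per_group, prevalence):
    """Draw the same number of cases with the same prevalence for each subgroup."""
    parts = []
    for _, g in df.groupby(group_col):
        pos = g[g[label_col] == 1]
        neg = g[g[label_col] == 0]
        n_pos = int(round(n_per_group * prevalence))
        parts.append(pos.sample(n_pos, random_state=0))
        parts.append(neg.sample(n_per_group - n_pos, random_state=0))
    return pd.concat(parts, ignore_index=True)

def tpr_fpr(df, label_col, score_col, threshold=0.5):
    pred = (df[score_col] >= threshold).astype(int)
    tpr = ((pred == 1) & (df[label_col] == 1)).sum() / (df[label_col] == 1).sum()
    fpr = ((pred == 1) & (df[label_col] == 0)).sum() / (df[label_col] == 0).sum()
    return tpr, fpr

# Synthetic predictions stand in for a disease-detection model's outputs.
test = pd.DataFrame({
    "sex": rng.choice(["F", "M"], size=2000),
    "label": rng.integers(0, 2, size=2000),
    "score": rng.random(2000),
})
balanced = resample_matched(test, "sex", "label", n_per_group=400, prevalence=0.3)
for sex, g in balanced.groupby("sex"):
    print(sex, tpr_fpr(g, "label", "score"))
```

A multitask model of the kind referred to in the Methods can be sketched as a shared convolutional backbone with one head for multi-label disease detection and an auxiliary head for a protected characteristic; the backbone choice, layer sizes, and loss weighting below are assumptions for illustration only.

```python
# Minimal multitask sketch: shared image backbone, disease head, and
# protected-characteristic head. ResNet-34 and the 0.1 loss weight are
# illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class MultiTaskCXR(nn.Module):
    def __init__(self, n_diseases, n_groups):
        super().__init__()
        backbone = resnet34(weights=None)
        # Single-channel input for grayscale chest X-rays.
        backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        d = backbone.fc.in_features
        self.disease_head = nn.Linear(d, n_diseases)  # multi-label disease detection
        self.group_head = nn.Linear(d, n_groups)      # auxiliary task, e.g. biological sex

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.disease_head(z), self.group_head(z)

model = MultiTaskCXR(n_diseases=14, n_groups=2)
logits_dis, logits_grp = model(torch.randn(2, 1, 224, 224))
loss = nn.BCEWithLogitsLoss()(logits_dis, torch.zeros(2, 14)) \
     + 0.1 * nn.CrossEntropyLoss()(logits_grp, torch.tensor([0, 1]))
```

The model-inspection step mentioned in the Findings complements these pieces; a simple stand-in would be a linear probe trained on the frozen backbone features to predict the protected attribute, which indicates how strongly that information is encoded in the representation.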

Bibliographic Details
Main Authors: Ben Glocker, Charles Jones, Mélanie Bernhardt, Stefan Winzeck (Department of Computing, Imperial College London, London, SW7 2AZ, UK; corresponding author: Ben Glocker)
Format: Article
Language: English
Published: Elsevier, 2023-03-01
Series: EBioMedicine, Vol. 89, Article 104467
ISSN: 2352-3964
Collection: Directory of Open Access Journals (DOAJ), record doaj.art-a3d3aada74b84787b0ef22431a38074c
Subjects: Artificial intelligence; Image-based disease detection; Algorithmic bias; Subgroup disparities
Online Access: http://www.sciencedirect.com/science/article/pii/S2352396423000324