An adversarial training framework for mitigating algorithmic biases in clinical machine learning
Main Authors: | Yang, J; Soltan, AAS; Eyre, DW; Yang, Y; Clifton, DA |
---|---|
Format: | Journal article |
Language: | English |
Published: | Springer Nature, 2023 |
author | Yang, J; Soltan, AAS; Eyre, DW; Yang, Y; Clifton, DA |
collection | OXFORD |
description | <p>Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how these tools may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework that is capable of mitigating biases that may have been acquired through data collection. We demonstrate this proposed framework on the real-world task of rapidly predicting COVID-19, and focus on mitigating site-specific (hospital) and demographic (ethnicity) biases. Using the statistical definition of equalized odds, we show that adversarial training improves outcome fairness, while still achieving clinically-effective screening performances (negative predictive values >0.98). We compare our method to previous benchmarks, and perform prospective and external validation across four independent hospital cohorts. Our method can be generalized to any outcomes, models, and definitions of fairness.</p> |
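The abstract's fairness criterion, equalized odds, requires that a classifier's true-positive and false-positive rates be equal across protected groups (here, hospital site or ethnicity). The following is a minimal sketch of how that gap can be measured; the function names and toy data are illustrative and not taken from the paper's implementation.

```python
def rates(y_true, y_pred):
    """Return (true-positive rate, false-positive rate) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in TPR or FPR; 0.0 is perfectly fair."""
    per_group = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        per_group[g] = rates([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    tprs = [r[0] for r in per_group.values()]
    fprs = [r[1] for r in per_group.values()]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy example: the same labels predicted differently at two hospital sites.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
site   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(equalized_odds_gap(y_true, y_pred, site))  # 0.5
```

The adversarial training described in the paper drives this gap toward zero by penalizing a predictor whenever an adversary can recover the group label from its outputs.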
id | oxford-uuid:bdff7a67-28d6-4174-ad13-0f4f3d0dca32 |
institution | University of Oxford |
title | An adversarial training framework for mitigating algorithmic biases in clinical machine learning |