(Predictable) performance bias in unsupervised anomaly detection

Bibliographic Details
Main Authors: Felix Meissen, Svenja Breuer, Moritz Knolle, Alena Buyx, Ruth Müller, Georgios Kaissis, Benedikt Wiestler, Daniel Rückert
Format: Article
Language: English
Published: Elsevier 2024-03-01
Series: EBioMedicine
Subjects: Artificial intelligence; Machine learning; Algorithmic bias; Subgroup disparities; Anomaly detection
Online Access: http://www.sciencedirect.com/science/article/pii/S2352396424000379
_version_ 1797317482280648704
author Felix Meissen
Svenja Breuer
Moritz Knolle
Alena Buyx
Ruth Müller
Georgios Kaissis
Benedikt Wiestler
Daniel Rückert
author_facet Felix Meissen
Svenja Breuer
Moritz Knolle
Alena Buyx
Ruth Müller
Georgios Kaissis
Benedikt Wiestler
Daniel Rückert
author_sort Felix Meissen
collection DOAJ
description Summary: Background: With the ever-increasing amount of medical imaging data, the demand for algorithms to assist clinicians has amplified. Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection. While previous studies have thoroughly explored fairness in supervised models in healthcare, for UAD, this has so far been unexplored. Methods: In this study, we evaluated how dataset composition regarding subgroups manifests in disparate performance of UAD models along multiple protected variables on three large-scale publicly available chest X-ray datasets. Our experiments were validated using two state-of-the-art UAD models for medical images. Finally, we introduced subgroup-AUROC (sAUROC), which aids in quantifying fairness in machine learning. Findings: Our experiments revealed empirical “fairness laws” (similar to “scaling laws” for Transformers) for training-dataset composition: Linear relationships between anomaly detection performance within a subpopulation and its representation in the training data. Our study further revealed performance disparities, even in the case of balanced training data, and compound effects that exacerbate the drop in performance for subjects associated with multiple adversely affected groups. Interpretation: Our study quantified the disparate performance of UAD models against certain demographic subgroups. Importantly, we showed that this unfairness cannot be mitigated by balanced representation alone. Instead, the representation of some subgroups seems harder to learn by UAD models than that of others. The empirical “fairness laws” discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition. Funding: European Research Council Deep4MI.
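The description introduces two technical devices: the subgroup-AUROC (sAUROC) metric and the empirical "fairness laws", i.e., a roughly linear relationship between a subgroup's share of the training data and the anomaly detection performance achieved on it. Below is a minimal Python sketch of both, assuming sAUROC is the AUROC evaluated on one subgroup's samples in isolation (the paper's exact formulation may differ, e.g., subgroup anomalies scored against the pooled normal set); all numbers in the example are hypothetical placeholders, not results from the paper.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def subgroup_auroc(y_true, scores, groups, subgroup):
        """sAUROC sketch: AUROC restricted to the samples of one subgroup.

        Assumption: per-subgroup AUROC; the paper's exact definition
        may differ from this reading.
        """
        mask = groups == subgroup
        return roc_auc_score(y_true[mask], scores[mask])

    # Hypothetical evaluation data: binary anomaly labels, anomaly scores
    # from some UAD model, and one protected attribute per sample.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    scores = 0.5 * y_true + rng.normal(size=1000)  # scores loosely track labels
    groups = rng.choice(["A", "B"], size=1000)

    for g in ("A", "B"):
        print(g, subgroup_auroc(y_true, scores, groups, g))

    # Empirical "fairness law": fit sAUROC as a linear function of the
    # subgroup's representation in the training set. The measurements
    # below are illustrative placeholders.
    shares = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
    sauroc = np.array([0.71, 0.74, 0.77, 0.80, 0.83])
    slope, intercept = np.polyfit(shares, sauroc, deg=1)
    print(f"Predicted sAUROC at 60% representation: {slope * 0.6 + intercept:.3f}")

Under such a fit, the slope quantifies how strongly a subgroup's performance depends on its training representation, which is what makes the bias in the title "predictable".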
first_indexed 2024-03-08T03:35:40Z
format Article
id doaj.art-d62efed5f9a843a4a8bf244eda853221
institution Directory Open Access Journal
issn 2352-3964
language English
last_indexed 2024-03-08T03:35:40Z
publishDate 2024-03-01
publisher Elsevier
record_format Article
series EBioMedicine
spelling doaj.art-d62efed5f9a843a4a8bf244eda853221 2024-02-10T04:44:40Z eng Elsevier EBioMedicine 2352-3964 2024-03-01 Vol. 101 Article 105002 (Predictable) performance bias in unsupervised anomaly detection
Felix Meissen: Chair for AI in Healthcare and Medicine, Klinikum rechts der Isar der Technischen Universität München, Einsteinstr. 25, Munich, 81675, Germany; Corresponding author.
Svenja Breuer: Department of Science, Technology and Society, School of Social Sciences and Technology, Technical University of Munich, Arcisstr. 21, Munich, 80333, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Arcisstraße 21, 80333, Munich, Germany
Moritz Knolle: Chair for AI in Healthcare and Medicine, Klinikum rechts der Isar der Technischen Universität München, Einsteinstr. 25, Munich, 81675, Germany; Konrad Zuse School of Excellence in Reliable AI, Munich Data Science Institute (MDSI), Walther-von-Dyck-Str. 10, Garching, 85748, Germany
Alena Buyx: Department of Science, Technology and Society, School of Social Sciences and Technology, Technical University of Munich, Arcisstr. 21, Munich, 80333, Germany; Institute for History and Ethics of Medicine, School of Medicine, Technical University of Munich, Prinzregentenstraße 68, Munich, 81675, Germany
Ruth Müller: Department of Science, Technology and Society, School of Social Sciences and Technology, Technical University of Munich, Arcisstr. 21, Munich, 80333, Germany; Department of Economics and Policy, School of Management, Technical University of Munich, Arcisstraße 21, 80333, Munich, Germany
Georgios Kaissis: Chair for AI in Healthcare and Medicine, Klinikum rechts der Isar der Technischen Universität München, Einsteinstr. 25, Munich, 81675, Germany; Institute for Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Germany; Department of Computing, Imperial College London, London, SW7 2AZ, UK
Benedikt Wiestler: Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Ismaninger Str. 22, Munich, 81675, Germany; TranslaTUM, Center for Translational Cancer Research, Technical University of Munich, Ismaninger Str. 22, Munich, 81675, Germany
Daniel Rückert: Chair for AI in Healthcare and Medicine, Klinikum rechts der Isar der Technischen Universität München, Einsteinstr. 25, Munich, 81675, Germany; Department of Computing, Imperial College London, London, SW7 2AZ, UK
Abstract: as given in the description field above.
http://www.sciencedirect.com/science/article/pii/S2352396424000379
Artificial intelligence; Machine learning; Algorithmic bias; Subgroup disparities; Anomaly detection
spellingShingle Felix Meissen
Svenja Breuer
Moritz Knolle
Alena Buyx
Ruth Müller
Georgios Kaissis
Benedikt Wiestler
Daniel Rückert
(Predictable) performance bias in unsupervised anomaly detection
EBioMedicine
Artificial intelligence
Machine learning
Algorithmic bias
Subgroup disparities
Anomaly detection
title (Predictable) performance bias in unsupervised anomaly detection
title_full (Predictable) performance bias in unsupervised anomaly detection
title_fullStr (Predictable) performance bias in unsupervised anomaly detection
title_full_unstemmed (Predictable) performance bias in unsupervised anomaly detection
title_short (Predictable) performance bias in unsupervised anomaly detection
title_sort predictable performance bias in unsupervised anomaly detection
topic Artificial intelligence
Machine learning
Algorithmic bias
Subgroup disparities
Anomaly detection
url http://www.sciencedirect.com/science/article/pii/S2352396424000379
work_keys_str_mv AT felixmeissen predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT svenjabreuer predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT moritzknolle predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT alenabuyx predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT ruthmuller predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT georgioskaissis predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT benediktwiestler predictableperformancebiasinunsupervisedanomalydetectionresearchincontext
AT danielruckert predictableperformancebiasinunsupervisedanomalydetectionresearchincontext