Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics

Abstract: Measuring bias is key for better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics, which quantify the differences in a model’s behaviour across a range of demographic groups. In this work, we shed more light on the differences and similarities between the fairness metrics used in NLP. First, we unify a broad range of existing metrics under three generalized fairness metrics, revealing the connections between them. Next, we carry out an extensive empirical comparison of existing metrics and demonstrate that the observed differences in bias measurement can be systematically explained via differences in parameter choices for our generalized metrics.
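The abstract's central idea, quantifying differences in a model's behaviour across demographic groups, can be illustrated with a minimal sketch. This is not the paper's three generalized metrics; it is a hypothetical example of one common extrinsic pattern: compute a per-group score (here, accuracy) and report the absolute gap for each pair of groups. All data and function names below are illustrative.

```python
from itertools import combinations

def group_scores(y_true, y_pred, groups):
    """Per-group accuracy for a hypothetical binary classifier."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def pairwise_gaps(scores):
    """Absolute score difference for every pair of groups; a gap of 0
    means the model behaves identically on both groups by this score."""
    return {
        (a, b): abs(scores[a] - scores[b])
        for a, b in combinations(sorted(scores), 2)
    }

# Toy labels/predictions for two demographic groups A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

scores = group_scores(y_true, y_pred, groups)  # {"A": 0.75, "B": 0.5}
gaps = pairwise_gaps(scores)                   # {("A", "B"): 0.25}
```

Swapping accuracy for another per-group score (e.g., false positive rate) or changing how the pairwise gaps are aggregated yields different metrics from this same template, which is the kind of parameter choice the paper's comparison systematically varies.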

Bibliographic Details

Main Authors: Paula Czarnowska (University of Cambridge, UK), Yogarshi Vyas (Amazon AI, USA), Kashif Shah (Amazon AI, USA)
Format: Article
Language: English
Published: The MIT Press, 2021-01-01
Series: Transactions of the Association for Computational Linguistics, Volume 9, pp. 1249-1267
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00425
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00425/108201/Quantifying-Social-Biases-in-NLP-A-Generalization