Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Abstract: Measuring bias is key to better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics, which quantify the differences in a model's behaviour across a range of demographic groups. In this work, we shed more light on the differences...
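The abstract's notion of an extrinsic fairness metric, a score computed from a model's outputs on different demographic groups, can be sketched minimally as below. The metric choice (largest pairwise gap in per-group accuracy) and all names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an extrinsic fairness metric: the maximum pairwise
# gap in per-group accuracy. Illustrative only; the paper generalizes
# over many such metrics rather than prescribing this one.
from itertools import combinations

def group_accuracy_gap(labels, preds, groups):
    """Largest absolute accuracy difference between any two groups."""
    by_group = {}
    for y, p, g in zip(labels, preds, groups):
        by_group.setdefault(g, []).append(y == p)
    acc = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(abs(acc[a] - acc[b]) for a, b in combinations(acc, 2))

# Toy data: group A gets 2/3 of predictions right, group B gets 3/3.
labels = [1, 0, 1, 1, 0, 1]
preds  = [1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "B"]
print(group_accuracy_gap(labels, preds, groups))  # → 0.333...
```

A gap of 0 would indicate the model performs identically across groups under this metric; larger values indicate greater disparity.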
| | |
|---|---|
| Main Authors | Paula Czarnowska, Yogarshi Vyas, Kashif Shah |
| Format | Article |
| Language | English |
| Published | The MIT Press, 2021-01-01 |
| Series | Transactions of the Association for Computational Linguistics |
| Online Access | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00425/108201/Quantifying-Social-Biases-in-NLP-A-Generalization |
Similar Items
- A Survey on Bias in Deep NLP, by Ismael Garrido-Muñoz, et al. Published: 2021-04-01
- Extrinsic Temporal Metrics, by Bradford Skow. Published: 2012
- GENERAL ASPECTS OF NLP IN TEACHING LANGUAGES, by Clementina Niţă, et al. Published: 2011-11-01
- NLP, by Steve Bavister, et al. Published: 2004
- An Empirical Survey of Data Augmentation for Limited Data Learning in NLP, by Jiaao Chen, et al. Published: 2023-01-01