Bias detection by using name disparity tables across protected groups
As AI-based models take an increasingly central role in our lives, so does the concern for fairness. In recent years, mounting evidence reveals how vulnerable AI models are to bias and the challenges involved in detection and mitigation. Our contribution is three-fold. Firstly, we gather name disparity tables across protected groups, allowing us to estimate sensitive attributes (gender, race). Using these estimates, we compute bias metrics given a classification model’s predictions. We leverage only names/zip codes; hence, our method is model and feature agnostic. Secondly, we offer an open-source Python package that produces a bias detection report based on our method. Finally, we demonstrate that names of older individuals are better predictors of race and gender and that double surnames are a reasonable predictor of gender. We tested our method on publicly available datasets (US Congress) and classifiers (COMPAS) and found it to be consistent with them.
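The abstract describes an estimate-then-measure pattern: look up each individual's name in a disparity table to infer a likely protected group, then compare the model's prediction rates across the inferred groups. Below is a minimal Python sketch of that idea, not the authors' open-source package: the lookup table, names, and probabilities are hypothetical placeholders, and the metric shown is a simple demographic-parity gap rather than the paper's full set of bias metrics.

```python
# Minimal sketch, NOT the authors' package: the table, names, and
# probabilities below are hypothetical placeholders for illustration.
from collections import defaultdict

# A toy "name disparity table": P(group | first name).
NAME_TO_GENDER_PROB = {
    "olivia": {"female": 0.98, "male": 0.02},
    "liam":   {"female": 0.03, "male": 0.97},
    "taylor": {"female": 0.55, "male": 0.45},
}

def estimate_gender(first_name):
    """Most likely group for a name, or None if the name is unseen."""
    probs = NAME_TO_GENDER_PROB.get(first_name.lower())
    return max(probs, key=probs.get) if probs else None

def demographic_parity_gap(names, predictions):
    """Gap in positive-prediction rates between inferred groups."""
    by_group = defaultdict(list)
    for name, pred in zip(names, predictions):
        group = estimate_gender(name)
        if group is not None:
            by_group[group].append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Usage: only names and the model's predictions (1 = favourable outcome)
# are needed, which is what makes the approach model and feature agnostic.
names = ["Olivia", "Liam", "Taylor", "Olivia", "Liam"]
preds = [1, 1, 0, 0, 1]
print(demographic_parity_gap(names, preds))  # ~0.667: male 1.0 vs female 1/3
```

In the paper's setting, the same pattern presumably extends to race via surname and zip-code tables, and the open-source package bundles such metrics into a bias detection report.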
Main Authors: | Elhanan Mishraky, Aviv Ben Arie, Yair Horesh, Shir Meir Lador |
---|---|
Author Affiliation: | Intuit Inc., 2700 Coast Ave, Mountain View, CA, USA |
Format: | Article |
Language: | English |
Published: | Elsevier, 2022-04-01 |
Series: | Journal of Responsible Technology |
ISSN: | 2666-6596 |
Subjects: | Fairness in AI; Protected groups; Machine bias detection; Open-source |
Collection: | Directory of Open Access Journals (DOAJ) |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2666659621000135 |