Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair
Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this …
Main Authors: | Singh, Arashdeep; Singh, Jashandeep; Khan, Ariba; Gupta, Amar |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Published: | Multidisciplinary Digital Publishing Institute, 2022 |
Online Access: | https://hdl.handle.net/1721.1/141366 |
_version_ | 1826210898257117184 |
---|---|
author | Singh, Arashdeep Singh, Jashandeep Khan, Ariba Gupta, Amar |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
author_facet | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Singh, Arashdeep Singh, Jashandeep Khan, Ariba Gupta, Amar |
author_sort | Singh, Arashdeep |
collection | MIT |
description | Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white), thus allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both accuracy and fairness from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI, a measure of bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics. |
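The abstract's AWI metric measures bias via inconsistent counterfactual predictions: a prediction that flips when only a sensitive attribute changes signals discrimination. Below is a minimal, hypothetical Python sketch of that idea for multiple sensitive parameters and options; all names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a counterfactual-inconsistency bias score,
# in the spirit of the AWI metric described in the abstract.
# Every function and field name here is a hypothetical example.
from itertools import product

def counterfactual_inconsistency(predict, records, sensitive_options):
    """Fraction of records whose prediction changes under any swap of
    sensitive attribute values (lower = fairer)."""
    inconsistent = 0
    keys = list(sensitive_options)
    for rec in records:
        base = predict(rec)
        # Enumerate every counterfactual assignment of the sensitive fields.
        for combo in product(*(sensitive_options[k] for k in keys)):
            cf = dict(rec)
            cf.update(zip(keys, combo))
            if predict(cf) != base:
                inconsistent += 1
                break
    return inconsistent / len(records)

# Toy loan models: one conditions on race, one only on income.
biased = lambda r: r["income"] >= 50 and r["race"] == "white"
fair = lambda r: r["income"] >= 50

data = [{"race": "white", "gender": "f", "income": 60},
        {"race": "black", "gender": "m", "income": 40}]
opts = {"race": ["white", "black"], "gender": ["m", "f"]}

print(counterfactual_inconsistency(biased, data, opts))  # 0.5
print(counterfactual_inconsistency(fair, data, opts))    # 0.0
```

The biased model flips its decision for the first applicant when race is swapped, so half the records are counterfactually inconsistent; the income-only model scores zero.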
first_indexed | 2024-09-23T14:57:32Z |
format | Article |
id | mit-1721.1/141366 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T14:57:32Z |
publishDate | 2022 |
publisher | Multidisciplinary Digital Publishing Institute |
record_format | dspace |
spelling | mit-1721.1/1413662023-02-08T20:37:12Z Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair Singh, Arashdeep Singh, Jashandeep Khan, Ariba Gupta, Amar Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white), thus allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both accuracy and fairness from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI, a measure of bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics.
2022-03-24T18:56:48Z 2022-03-24T18:56:48Z 2022-03-12 2022-03-24T14:46:48Z Article http://purl.org/eprint/type/JournalArticle https://hdl.handle.net/1721.1/141366 Machine Learning and Knowledge Extraction 4 (1): 240-253 (2022) PUBLISHER_CC http://dx.doi.org/10.3390/make4010011 Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/ application/pdf Multidisciplinary Digital Publishing Institute Multidisciplinary Digital Publishing Institute |
spellingShingle | Singh, Arashdeep Singh, Jashandeep Khan, Ariba Gupta, Amar Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title | Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title_full | Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title_fullStr | Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title_full_unstemmed | Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title_short | Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair |
title_sort | developing a novel fair loan classifier through a multi sensitive debiasing pipeline dualfair |
url | https://hdl.handle.net/1721.1/141366 |
work_keys_str_mv | AT singharashdeep developinganovelfairloanclassifierthroughamultisensitivedebiasingpipelinedualfair AT singhjashandeep developinganovelfairloanclassifierthroughamultisensitivedebiasingpipelinedualfair AT khanariba developinganovelfairloanclassifierthroughamultisensitivedebiasingpipelinedualfair AT guptaamar developinganovelfairloanclassifierthroughamultisensitivedebiasingpipelinedualfair |