Active Fairness in Algorithmic Decision Making


Bibliographic Details
Main Authors: Noriega-Campero, Alejandro, Bakker, Michiel A, Garcia-Bulle, Bernardo, Pentland, Alex 'Sandy'
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2021
Online Access: https://hdl.handle.net/1721.1/137087
Description: © 2019 Copyright held by the owner/author(s). Society increasingly relies on machine learning models for automated decision making. Yet, efficiency gains from automation have come paired with concern for algorithmic discrimination that can systematize inequality. Recent work has proposed optimal post-processing methods that randomize classification decisions for a fraction of individuals, in order to achieve fairness measures related to parity in errors and calibration. These methods, however, have raised concern due to the information inefficiency, intra-group unfairness, and Pareto sub-optimality they entail. The present work proposes an alternative active framework for fair classification, where, in deployment, a decision-maker adaptively acquires information according to the needs of different groups or individuals, towards balancing disparities in classification performance. We propose two such methods, where information collection is adapted to group- and individual-level needs respectively. We show on real-world datasets that these can achieve: 1) calibration and single error parity (e.g., equal opportunity); and 2) parity in both false positive and false negative rates (i.e., equal odds). Moreover, we show that by leveraging their additional degree of freedom, active approaches can substantially outperform randomization-based classifiers previously considered optimal, while avoiding limitations such as intra-group unfairness.
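The core idea in the abstract — adaptively collecting more information for the groups or individuals that need it, rather than randomizing decisions — can be caricatured in a short toy simulation. Everything below is an illustrative assumption, not the authors' implementation: the stand-in classifier (`predict_proba`), the per-group confidence thresholds, and the synthetic noise levels are all hypothetical choices made only to show the mechanism.

```python
# Illustrative sketch only (assumed, not the paper's code): adaptive
# information acquisition for fair classification. Features are queried
# one at a time until a running estimate is confident enough; a stricter
# threshold for the harder group makes the decision-maker collect more
# evidence for its members, balancing performance across groups.
import random

random.seed(0)


def predict_proba(seen):
    """Toy stand-in for a classifier that tolerates missing features:
    the running mean of the binary features acquired so far."""
    return sum(seen) / len(seen)


def active_classify(features, threshold, min_queries=3, max_queries=8):
    """Acquire features sequentially; stop once the estimate leaves the
    uncertain band [1 - threshold, threshold] or the budget runs out."""
    seen = []
    for value in features[:max_queries]:
        seen.append(value)  # acquiring one more feature costs effort
        if len(seen) >= min_queries:
            p = predict_proba(seen)
            if p >= threshold or p <= 1 - threshold:
                break  # confident enough: stop collecting
    return int(predict_proba(seen) >= 0.5), len(seen)


def sample(group):
    """Hypothetical data: group B's features are noisier than group A's."""
    label = random.random() < 0.5
    noise = 0.1 if group == "A" else 0.3
    feats = [int((random.random() < noise) != label) for _ in range(8)]
    return feats, int(label)


results = {}
# The per-group threshold is the "active" lever: raising it for the
# noisier group trades extra information collection for balanced errors.
for group, threshold in [("A", 0.75), ("B", 0.9)]:
    errors, cost, n = 0, 0, 500
    for _ in range(n):
        feats, y = sample(group)
        y_hat, k = active_classify(feats, threshold)
        errors += int(y_hat != y)
        cost += k
    results[group] = (errors / n, cost / n)
    print(group, "error rate:", results[group][0],
          "avg features acquired:", results[group][1])
```

In this sketch the noisier group ends up with more features acquired per individual, which is the extra degree of freedom the abstract contrasts with randomization-based post-processing.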
Type: Conference Paper
Citation: Noriega-Campero, Alejandro, Bakker, Michiel A, Garcia-Bulle, Bernardo and Pentland, Alex 'Sandy'. 2019. "Active Fairness in Algorithmic Decision Making." AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
DOI: 10.1145/3306618.3314277
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)