Algorithmic Fairness in Sequential Decision Making

Bibliographic Details
Main Author: Sun, Yi
Other Authors: Veeramachaneni, Kalyan
Format: Thesis
Published: Massachusetts Institute of Technology 2023
Online Access: https://hdl.handle.net/1721.1/150718
author Sun, Yi
author2 Veeramachaneni, Kalyan
author_facet Veeramachaneni, Kalyan
Sun, Yi
author_sort Sun, Yi
collection MIT
description Machine learning algorithms are used in a wide range of applications, and there are growing concerns about the potential biases of those algorithms. While many solutions have been proposed for addressing biases in an algorithm's predictions, there is still a gap in translating predictions into a justified decision. Moreover, even a justified and fair decision can lead to undesirable consequences when decisions create a feedback effect. While numerous solutions have been proposed for achieving fairness in one-shot decision-making, the long-term effects of sequential algorithmic decisions remain underexplored. In this thesis, we study algorithmic fairness in a sequential decision-making setting. We first study how to translate model predictions into fair decisions. In particular, given predictions from black-box models (machine learning models or human experts), we propose an algorithm based on the classical learning-from-experts scheme that combines predictions to produce a fair and accurate decision. Our theoretical results show that approximate equalized odds can be achieved without incurring much additional regret. We also demonstrate the algorithm's performance on real data sets commonly used by the fairness community. In the second part of the thesis, we study whether enforcing static fair decisions in the sequential setting can lead to long-term equality and improvement for disadvantaged groups under a feedback loop. In particular, we model the interaction between algorithmic decisions and the underlying distribution using a Markov decision model with general transition functions. We propose a new metric that captures the distributional impact of algorithmic decisions, as measured by the change in the distribution's center, spread, and shape.
This metric decomposes the impact into within-group impact and between-group impact: within-group impact measures how a policy shifts the distribution within a group, while between-group impact measures how a policy affects the distributions of two population groups differently. Our results show that there is generally a trade-off between utility and between-group impact for threshold policies, and that common fairness constraints can lead to "backfire effects" in which the impact across groups is disparate.
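The abstract does not spell out the learning-from-experts scheme it builds on; as a rough illustration only, the classical Hedge (multiplicative-weights) algorithm combines black-box expert predictions as sketched below. This is a minimal sketch assuming binary predictions, 0/1 losses, and a fixed learning rate; the function and parameter names are illustrative and not taken from the thesis, which additionally imposes fairness (equalized-odds) constraints not shown here.

```python
import numpy as np

def hedge(expert_preds, labels, eta=0.5):
    """Combine expert predictions with multiplicative weights (Hedge).

    expert_preds: (T, K) array of 0/1 predictions from K black-box experts
    labels:       (T,) array of realized 0/1 outcomes
    Returns the sequence of combined decisions and the regret
    relative to the single best expert in hindsight.
    """
    T, K = expert_preds.shape
    w = np.ones(K)                      # uniform initial expert weights
    decisions, learner_loss = [], 0.0
    for t in range(T):
        p = w / w.sum()                 # current distribution over experts
        # weighted-majority vote of the experts' predictions
        decision = int(p @ expert_preds[t] >= 0.5)
        decisions.append(decision)
        learner_loss += decision != labels[t]
        # 0/1 loss per expert this round; down-weight experts that erred
        losses = (expert_preds[t] != labels[t]).astype(float)
        w *= np.exp(-eta * losses)
    best_expert_loss = (expert_preds != labels[:, None]).sum(axis=0).min()
    regret = learner_loss - best_expert_loss
    return decisions, regret
```

In the standard analysis, Hedge's regret against the best expert grows only as O(sqrt(T log K)), which is the sense in which a combined decision rule can track the best black-box predictor; the thesis's contribution is showing that a fairness constraint such as approximate equalized odds can be layered on without giving up much of this regret guarantee.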
first_indexed 2024-09-23T14:53:28Z
format Thesis
id mit-1721.1/150718
institution Massachusetts Institute of Technology
last_indexed 2024-09-23T14:53:28Z
publishDate 2023
publisher Massachusetts Institute of Technology
record_format dspace
spelling mit-1721.1/150718 2023-05-16T03:12:58Z Algorithmic Fairness in Sequential Decision Making Sun, Yi Veeramachaneni, Kalyan Massachusetts Institute of Technology. Institute for Data, Systems, and Society Ph.D. 2023-05-15T19:34:47Z 2023-05-15T19:34:47Z 2023-02 2023-05-08T20:35:30.945Z Thesis https://hdl.handle.net/1721.1/150718 In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology
spellingShingle Sun, Yi
Algorithmic Fairness in Sequential Decision Making
title Algorithmic Fairness in Sequential Decision Making
title_full Algorithmic Fairness in Sequential Decision Making
title_fullStr Algorithmic Fairness in Sequential Decision Making
title_full_unstemmed Algorithmic Fairness in Sequential Decision Making
title_short Algorithmic Fairness in Sequential Decision Making
title_sort algorithmic fairness in sequential decision making
url https://hdl.handle.net/1721.1/150718
work_keys_str_mv AT sunyi algorithmicfairnessinsequentialdecisionmaking