Contrastive fairness in machine learning
Was it fair that Harry was hired but not Barry? Was it fair that Pam was fired instead of Sam? How can one ensure fairness when an intelligent algorithm takes these decisions instead of a human? How can one ensure that the decisions were taken based on merit and not on protected attributes like race or sex? These are the questions that must be answered now that many decisions in real life can be made through machine learning. However, research in fairness of algorithms has focused on the counterfactual questions “what if?” or “why?”, whereas in real life most subjective questions of consequence are contrastive: “why this but not that?”. We introduce concepts and mathematical tools using causal inference to address contrastive fairness in algorithmic decision-making with illustrative examples.
Main Authors: | Chakraborti, T; Patra, A; Noble, JA |
---|---|
Format: | Journal article |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2020 |
Field | Value |
---|---|
author | Chakraborti, T; Patra, A; Noble, JA |
collection | OXFORD |
description | Was it fair that Harry was hired but not Barry? Was it fair that Pam was fired instead of Sam? How can one ensure fairness when an intelligent algorithm takes these decisions instead of a human? How can one ensure that the decisions were taken based on merit and not on protected attributes like race or sex? These are the questions that must be answered now that many decisions in real life can be made through machine learning. However, research in fairness of algorithms has focused on the counterfactual questions “what if?” or “why?”, whereas in real life most subjective questions of consequence are contrastive: “why this but not that?”. We introduce concepts and mathematical tools using causal inference to address contrastive fairness in algorithmic decision-making with illustrative examples. |
format | Journal article |
id | oxford-uuid:04f8e7ab-263b-4908-9eaf-ab5bf42cd906 |
institution | University of Oxford |
language | English |
publishDate | 2020 |
publisher | Institute of Electrical and Electronics Engineers |
record_format | dspace |
title | Contrastive fairness in machine learning |