Attesting Digital Discrimination Using Norms


Bibliographic Details
Main Authors: Natalia Criado, Xavier Ferrer, José M. Such
Format: Article
Language:English
Published: Universidad Internacional de La Rioja (UNIR) 2021-03-01
Series:International Journal of Interactive Multimedia and Artificial Intelligence
Subjects:
Online Access:https://www.ijimai.org/journal/bibcite/reference/2898
_version_ 1818736259205431296
author Natalia Criado
Xavier Ferrer
José M. Such
author_facet Natalia Criado
Xavier Ferrer
José M. Such
author_sort Natalia Criado
collection DOAJ
description More and more decisions are being delegated to Machine Learning (ML) and automated decision systems. Despite initial misconceptions that these systems were unbiased and fair, recent cases, such as racist algorithms being used to inform parole decisions in the US, low-income neighbourhoods being targeted with high-interest loans and low credit scores, and women being undervalued by online marketing, have fuelled public distrust in machine learning. This poses a significant challenge to the adoption of ML by companies and public-sector organisations, despite ML's potential to cut costs and support more efficient decisions, and it is motivating research in algorithmic fairness and fair ML. Much of that research provides detailed statistics, metrics and algorithms that are difficult to interpret and use for someone without technical skills. This paper aims to bridge the gap between lay users and fairness metrics by using simpler notions and concepts to represent and reason about digital discrimination. In particular, we use norms as an abstraction to communicate situations that may lead to algorithms committing discrimination: we formalise non-discrimination norms in the context of ML systems and propose an algorithm to attest whether ML systems violate these norms.
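The norm-attestation idea summarised in the abstract can be illustrated with a minimal sketch. This is not the authors' formalisation or algorithm, only an assumed simplification: a non-discrimination norm is modelled as a prohibition on disparate favourable outcomes for a protected group (statistical parity), and attestation checks whether a system's decisions exceed a chosen tolerance. All names and the tolerance value are hypothetical.

```python
# Hedged illustration only: the paper's norm formalisation is richer.
# Here a "norm" is a prohibition on disparate outcomes for a protected
# group, attested against a record of a system's decisions.

def parity_gap(decisions, protected):
    """Difference in favourable-decision rates between the
    non-protected and protected groups (statistical parity gap)."""
    prot = [d for d, p in zip(decisions, protected) if p]
    rest = [d for d, p in zip(decisions, protected) if not p]
    return sum(rest) / len(rest) - sum(prot) / len(prot)

def violates_norm(decisions, protected, tolerance=0.1):
    """Attest a non-discrimination norm: the absolute gap in
    favourable outcomes must not exceed the tolerance."""
    return abs(parity_gap(decisions, protected)) > tolerance

# Example: 1 = favourable decision; True = protected-group member.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [True, True, True, True, False, False, False, False]
print(violates_norm(decisions, protected))  # prints True (gap is -0.5)
```

The point of the abstraction is that the verdict ("this norm is violated") is communicable to a lay user, while the underlying metric stays hidden; other fairness metrics could be substituted behind the same norm interface.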
first_indexed 2024-12-18T00:34:18Z
format Article
id doaj.art-a06e47e312a24b6e874823d171797655
institution Directory Open Access Journal
issn 1989-1660
language English
last_indexed 2024-12-18T00:34:18Z
publishDate 2021-03-01
publisher Universidad Internacional de La Rioja (UNIR)
record_format Article
series International Journal of Interactive Multimedia and Artificial Intelligence
spelling doaj.art-a06e47e312a24b6e874823d171797655 2022-12-21T21:27:03Z
eng
Universidad Internacional de La Rioja (UNIR)
International Journal of Interactive Multimedia and Artificial Intelligence
ISSN 1989-1660
2021-03-01
vol. 6, no. 5, pp. 16-23
doi:10.9781/ijimai.2021.02.008
Attesting Digital Discrimination Using Norms
Natalia Criado
Xavier Ferrer
José M. Such
https://www.ijimai.org/journal/bibcite/reference/2898
discrimination
attestation
norms
machine learning
spellingShingle Natalia Criado
Xavier Ferrer
José M. Such
Attesting Digital Discrimination Using Norms
International Journal of Interactive Multimedia and Artificial Intelligence
discrimination
attestation
norms
machine learning
title Attesting Digital Discrimination Using Norms
title_full Attesting Digital Discrimination Using Norms
title_fullStr Attesting Digital Discrimination Using Norms
title_full_unstemmed Attesting Digital Discrimination Using Norms
title_short Attesting Digital Discrimination Using Norms
title_sort attesting digital discrimination using norms
topic discrimination
attestation
norms
machine learning
url https://www.ijimai.org/journal/bibcite/reference/2898
work_keys_str_mv AT nataliacriado attestingdigitaldiscriminationusingnorms
AT xavierferrer attestingdigitaldiscriminationusingnorms
AT josemsuch attestingdigitaldiscriminationusingnorms