Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law
Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning models. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups...
Main Authors: Wachter, S; Mittelstadt, B; Russell, C
Format: Journal article
Language: English
Published: West Virginia University, 2021
Similar Items
- Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI
  by: Wachter, S, et al.
  Published: (2021)
- The unfairness of fair machine learning: leveling down and strict egalitarianism by default
  by: Mittelstadt, B, et al.
  Published: (2024)
- Discrimination, Bias, Fairness, and Trustworthy AI
  by: Daniel Varona, et al.
  Published: (2022-06-01)
- Do large language models have a legal duty to tell the truth?
  by: Wachter, S, et al.
  Published: (2024)
- The theory of artificial immutability: protecting algorithmic groups under anti-discrimination law
  by: Wachter, S
  Published: (2023)