AI generates covertly racist decisions about people based on their dialect
Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, biasing their judgements in problematic ways about groups such as...
Main Authors: , , ,
Format: Journal article
Language: English
Published: Nature Research, 2024