Like trainer, like bot? Inheritance of bias in algorithmic content moderation
The internet has become a central medium through which ‘networked publics’ express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers tra...
| Main Authors: | Binns, R, Veale, M, Van Kleek, M, Shadbolt, N |
|---|---|
| Format: | Conference item |
| Published: | Springer, 2017 |
Similar Items
- 'It's reducing a human being to a percentage'; Perceptions of justice in algorithmic decisions
  by: Binns, R, et al.
  Published: (2018)
- Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making
  by: Veale, M, et al.
  Published: (2018)
- The rise of social machines: the development of a human/digital ecosystem
  by: Shadbolt, N, et al.
  Published: (2016)
- So, tell me what users want, what they really, really want!
  by: Lyngs, U, et al.
  Published: (2018)
- Measuring third party tracker power across web and mobile
  by: Binns, R, et al.
  Published: (2018)