Turning a blind eye: explicit removal of biases and variation from deep neural network embeddings
Neural networks achieve state-of-the-art performance in image classification tasks. However, they can encode spurious variations or biases present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions (e.g....
Main authors: , ,
Format: Internet publication
Language: English
Published: ArXiv, 2018