Smooth loss functions for deep top-k classification
The top-$k$ error is a common measure of performance in machine learning and computer vision. In practice, top-$k$ classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. …
Main Authors: | Berrada, L; Zisserman, A; Mudigonda, P |
---|---|
Format: | Conference item |
Published: | 2018 |
_version_ | 1797075232391954432 |
---|---|
author | Berrada, L Zisserman, A Mudigonda, P |
author_facet | Berrada, L Zisserman, A Mudigonda, P |
author_sort | Berrada, L |
collection | OXFORD |
description | The top-$k$ error is a common measure of performance in machine learning and computer vision. In practice, top-$k$ classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data, however, the use of a loss function that is specifically designed for top-$k$ classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-$k$ optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naïve algorithm would require $\mathcal{O}(\binom{n}{k})$ operations, where $n$ is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of $\mathcal{O}(k n)$. Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating-point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of $k=5$. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy. |
first_indexed | 2024-03-06T23:47:30Z |
format | Conference item |
id | oxford-uuid:7173df2d-1c81-48cc-9028-3f89b2f325fa |
institution | University of Oxford |
last_indexed | 2024-03-06T23:47:30Z |
publishDate | 2018 |
record_format | dspace |
spelling | oxford-uuid:7173df2d-1c81-48cc-9028-3f89b2f325fa; 2022-03-26T19:43:40Z; Smooth loss functions for deep top-k classification; Conference item; http://purl.org/coar/resource_type/c_5794; uuid:7173df2d-1c81-48cc-9028-3f89b2f325fa; Symplectic Elements at Oxford; 2018; Berrada, L; Zisserman, A; Mudigonda, P |
spellingShingle | Berrada, L Zisserman, A Mudigonda, P Smooth loss functions for deep top-k classification |
title | Smooth loss functions for deep top-k classification |
title_full | Smooth loss functions for deep top-k classification |
title_fullStr | Smooth loss functions for deep top-k classification |
title_full_unstemmed | Smooth loss functions for deep top-k classification |
title_short | Smooth loss functions for deep top-k classification |
title_sort | smooth loss functions for deep top k classification |
work_keys_str_mv | AT berradal smoothlossfunctionsfordeeptopkclassification AT zissermana smoothlossfunctionsfordeeptopkclassification AT mudigondap smoothlossfunctionsfordeeptopkclassification |
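The abstract's complexity claim can be made concrete: a sum over all $k$-subsets of classes is an elementary symmetric polynomial, which a standard dynamic program evaluates in $\mathcal{O}(k n)$ instead of enumerating $\binom{n}{k}$ subsets. The sketch below is illustrative only, not the authors' divide-and-conquer algorithm, and it is numerically naive (the paper's GPU-stable approximation is precisely about avoiding the overflow this version suffers from); function names are hypothetical.

```python
import math

def elementary_symmetric(x, k):
    """e_k(x_1, ..., x_n): sum over all k-subsets of products of entries.
    Dynamic program in O(k*n) rather than O(n choose k) enumeration."""
    e = [0.0] * (k + 1)
    e[0] = 1.0
    for xi in x:
        # Update higher orders first so each x_i contributes at most once.
        for j in range(k, 0, -1):
            e[j] += xi * e[j - 1]
    return e[k]

def smooth_topk_logsumexp(scores, k, tau=1.0):
    """tau * log( sum over k-subsets S of exp( sum_{i in S} s_i / tau ) ),
    the smoothed max-over-k-subsets term, computed via e_k of exp(s_i/tau)."""
    x = [math.exp(s / tau) for s in scores]
    return tau * math.log(elementary_symmetric(x, k))
```

For small $n$ the result can be checked against brute-force enumeration of all $k$-subsets; the DP agrees exactly, while scaling linearly in $n$ for fixed $k$.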