Return of the devil in the details: delving deep into convolutional nets
The latest generation of Convolutional Neural Networks (CNN) has achieved impressive results on challenging benchmarks in image recognition and object detection, significantly raising the community's interest in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, yielding an analogous performance boost. Source code and models to reproduce the experiments in the paper are made publicly available.
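The abstract refers to data augmentation shared between deep and shallow pipelines (commonly crops and horizontal flips, with the resulting descriptors pooled) but does not spell it out in this record. Purely as an illustration, and not the paper's released code, a minimal NumPy sketch of that idea might look like the following; the `extract_features` stub and the 224-pixel crop size are assumptions made for the example.

```python
# Illustrative sketch only: crop/flip augmentation with average pooling of
# descriptors, written against NumPy with a placeholder encoder.
import numpy as np

def extract_features(image):
    # Placeholder encoder: a real pipeline would run a CNN layer or a
    # Fisher Vector encoder here. This stub just flattens and truncates
    # the image so the sketch is runnable.
    return image.reshape(-1)[:128].astype(np.float32)

def augmented_views(image, crop=224):
    """Yield four corner crops, the centre crop, and their horizontal flips."""
    h, w = image.shape[:2]
    offsets = [(0, 0), (0, w - crop), (h - crop, 0),
               (h - crop, w - crop), ((h - crop) // 2, (w - crop) // 2)]
    for top, left in offsets:
        view = image[top:top + crop, left:left + crop]
        yield view
        yield view[:, ::-1]  # horizontal flip

def pooled_descriptor(image):
    """Average the descriptors of all augmented views of one image."""
    feats = [extract_features(v) for v in augmented_views(image)]
    return np.mean(feats, axis=0)

if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)    # stand-in for a loaded photo
    print(pooled_descriptor(img).shape)  # (128,)
```

The same pooling step works whether `extract_features` is a deep network layer or a shallow encoding, which is the sense in which the augmentation can be shared between the two families of methods.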
Main Authors: | Chatfield, K; Simonyan, K; Vedaldi, A; Zisserman, A |
---|---|
Material Type: | Conference item |
Language: | English |
Publication Details: | British Machine Vision Association and Society for Pattern Recognition, 2014 |
author | Chatfield, K; Simonyan, K; Vedaldi, A; Zisserman, A |
---|---|
collection | OXFORD |
description | The latest generation of Convolutional Neural Networks (CNN) has achieved impressive results on challenging benchmarks in image recognition and object detection, significantly raising the community's interest in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, yielding an analogous performance boost. Source code and models to reproduce the experiments in the paper are made publicly available. |
format | Conference item |
id | oxford-uuid:48a4e3f3-5d16-4cb1-95e4-e82897a5c5e3 |
institution | University of Oxford |
language | English |
publishDate | 2014 |
publisher | British Machine Vision Association and Society for Pattern Recognition |
record_format | dspace |
title | Return of the devil in the details: delving deep into convolutional nets |