HPatches: A benchmark and evaluation of handcrafted and learned local descriptors
In this paper, a novel benchmark is introduced for evaluating local image descriptors. We demonstrate limitations of the commonly used datasets and evaluation protocols, which lead to ambiguities and contradictory results in the literature. Furthermore, these benchmarks are nearly saturated due to the recent improvements in local descriptors obtained by learning from large annotated datasets. …
Authors: | Balntas, V; Lenc, K; Vedaldi, A; Tuytelaars, T; Matas, J; Mikolajczyk, K |
---|---|
Format: | Journal article |
Language: | English |
Published: | IEEE, 2019 |
author | Balntas, V Lenc, K Vedaldi, A Tuytelaars, T Matas, J Mikolajczyk, K |
---|---|
collection | OXFORD |
description | In this paper, a novel benchmark is introduced for evaluating local image descriptors. We demonstrate limitations of the commonly used datasets and evaluation protocols, which lead to ambiguities and contradictory results in the literature. Furthermore, these benchmarks are nearly saturated due to the recent improvements in local descriptors obtained by learning from large annotated datasets. To address these issues, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols for several tasks, such as matching, retrieval and verification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors is able to boost their performance to the level of deep-learning-based descriptors once realistic benchmarks are considered. Additionally, we specify a protocol for learning and evaluation using cross-validation. We show that when training state-of-the-art descriptors on this dataset, the traditional verification task is almost entirely saturated. |
format | Journal article |
id | oxford-uuid:8617ce2d-80e8-4e39-9ece-948273941b5a |
institution | University of Oxford |
language | English |
publishDate | 2019 |
publisher | IEEE |
record_format | dspace |
title | HPatches: A benchmark and evaluation of handcrafted and learned local descriptors |
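The abstract above reports that a simple normalisation of traditional hand-crafted descriptors can bring them close to learned descriptors on the new benchmark, but the abstract does not spell out the normalisation. The sketch below is a minimal illustration only, assuming a RootSIFT-style recipe (L1-normalise, element-wise square root, then L2-normalise); the function name `root_normalise` and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def root_normalise(desc, eps=1e-7):
    """Illustrative RootSIFT-style normalisation (assumed, not the paper's exact recipe).

    desc: (N, D) array of non-negative raw descriptors (e.g. 128-D SIFT histograms).
    Returns an (N, D) array of normalised descriptors.
    """
    desc = np.asarray(desc, dtype=np.float64)
    # L1-normalise each descriptor so its entries sum to 1.
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    # Element-wise square root (power-law mapping).
    desc = np.sqrt(desc)
    # Final L2 normalisation for Euclidean comparison.
    return desc / (np.linalg.norm(desc, axis=1, keepdims=True) + eps)

# Toy verification-style usage with random stand-in descriptors (illustration only).
rng = np.random.default_rng(0)
descs = rng.random((2, 128))
da, db = root_normalise(descs)
print("descriptor distance:", np.linalg.norm(da - db))
```

The square-root mapping makes Euclidean distance on the transformed descriptors behave like a Hellinger-kernel comparison of the raw histograms, which is why such a small change to a hand-crafted descriptor can have a large effect in matching, retrieval and verification evaluations.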