Open-RadVLAD: fast shift and rotation invariant radar place recognition

Bibliographic Details
Main Authors: Gadd, M, Newman, P
Format: Conference item
Language:English
Published: IEEE 2024
collection OXFORD
description Radar place recognition often involves encoding a live scan as a vector and matching this vector against a database in order to recognise that the vehicle is in a location it has visited before. Radar is inherently robust to lighting and weather conditions, but place recognition with this sensor is still affected by: (1) viewpoint variation, i.e. translation and rotation, and (2) sensor artefacts or “noises”. For 360° scanning radar, rotation is readily dealt with by aggregating in some way across azimuths. We also argue in this work that it is more critical to deal with the richness of the representation and with sensor noises than with translational invariance – particularly in urban driving, where vehicles predominantly follow the same lane when repeating a route. In our method, for computational efficiency, we use only the polar representation. For partial translation invariance and robustness to signal noise, we use only a one-dimensional Fourier Transform along radial returns. We also achieve rotational invariance and a very discriminative descriptor space by building a vector of locally aggregated descriptors. Our method is more comprehensively tested than all prior radar place recognition work – over an exhaustive combination of all 870 pairs of trajectories from 30 Oxford Radar RobotCar Dataset sequences (each ≈10 km). Code and detailed results are provided at github.com/mttgdd/open-radvlad as an open implementation and benchmark for future work in this area. We achieve a median Recall@1 of 91.52%, outstripping the 69.55% of the only other open implementation, RaPlace, and at a fraction of its computational cost (relying on fewer integral transforms, e.g. Radon, Fourier, and inverse Fourier).
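The pipeline the abstract describes – a polar radar scan, a one-dimensional FFT magnitude along each radial return for partial shift invariance, and azimuth-order-independent VLAD aggregation for rotation invariance – can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' released code (see github.com/mttgdd/open-radvlad for that); the function name, centroid source, and normalisation details here are assumptions.

```python
import numpy as np

def radvlad_descriptor(polar_scan, centroids):
    """Hypothetical sketch of the Open-RadVLAD idea.

    polar_scan: (num_azimuths, num_range_bins) array of radar power returns.
    centroids:  (K, D) VLAD vocabulary, e.g. from k-means on training scans,
                where D = num_range_bins // 2 + 1 (the rfft output length).
    """
    # 1. Partial translation invariance and noise robustness: take the
    #    magnitude of a 1-D FFT along each radial return; dropping the
    #    phase discards the component that encodes range shifts.
    spectra = np.abs(np.fft.rfft(polar_scan, axis=1))   # (A, D)

    # 2. VLAD: treat each azimuth's spectrum as a local descriptor and
    #    accumulate residuals to the nearest vocabulary centroid.
    K, D = centroids.shape
    d2 = ((spectra[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                          # nearest centroid per azimuth
    vlad = np.zeros((K, D))
    for k in range(K):
        members = spectra[assign == k]
        if len(members):
            vlad[k] = (members - centroids[k]).sum(axis=0)

    # 3. Rotation invariance: a sensor rotation cyclically shifts the
    #    azimuth rows, and summing residuals over azimuths is insensitive
    #    to that row ordering.
    # Intra-normalise each cluster's residual, then L2-normalise overall,
    # as is conventional for VLAD descriptors.
    norms = np.linalg.norm(vlad, axis=1, keepdims=True)
    vlad = np.where(norms > 0, vlad / norms, vlad)
    v = vlad.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```

Matching then reduces to nearest-neighbour search over these unit vectors, e.g. by cosine similarity against the database of previously visited places.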
id oxford-uuid:b11b1f95-4c04-4175-9a46-6ee05e84377b
institution University of Oxford