Toward Precise Ambiguity-Aware Cross-Modality Global Self-Localization

There have been significant advances in GNSS-free cross-modality self-localization of self-driving vehicles. Recent methods focus on learnable features for both cross-modal global localization via place recognition (PR) and local pose tracking; however, they lack a means of combining them into a complete localization pipeline. That is, it has to be validated whether a pose retrieved from PR actually represents the true pose. Performing this validation without GNSS measurements makes the localization problem significantly more challenging. In this contribution, we propose a method to precisely localize the ego-vehicle in a high-resolution map without a GNSS prior. Furthermore, sensor and map data may differ in dimensionality (2D/3D) and modality, i.e., radar, lidar, or aerial imagery. We initialize our system with multiple hypotheses retrieved from a PR method and infer the correct hypothesis over time. This multi-hypothesis approach is realized with a Gaussian sum filter, which enables efficient tracking of a small number of hypotheses and allows our deep sensor-to-map matching network to be evaluated at arbitrarily distant regions simultaneously. We further propose a method to estimate the probability that none of the currently tracked hypotheses is correct. We achieve successful global localization in extensive experiments on the MulRan dataset, outperforming comparative methods even when none of the initial poses from PR is close to the true pose. Owing to the flexibility of the approach, we show state-of-the-art accuracy in lidar-to-aerial-imagery localization on a custom dataset using our pipeline with only minor modifications of the matching model.
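To make the multi-hypothesis idea concrete, the sketch below shows one way such a scheme could be organized: a small set of pose hypotheses from place recognition is tracked jointly with an explicit "null" hypothesis whose posterior weight approximates the probability that none of the tracked poses is correct. This is only an illustrative Python sketch, not the paper's Gaussian sum filter; the class name, the scalar matching likelihoods, and the constant background likelihood for the null hypothesis are assumptions introduced here for clarity.

    import numpy as np

    # Illustrative sketch (not the paper's implementation): track a few pose
    # hypotheses with normalized weights, update the weights from per-hypothesis
    # matching likelihoods, and estimate the probability that none of the
    # tracked hypotheses is correct via an explicit null hypothesis.
    class MultiHypothesisTracker:
        def __init__(self, initial_poses, p_null=0.1, background_likelihood=0.05):
            # initial_poses: (K, 3) array of [x, y, yaw] candidates, e.g. from place recognition
            self.poses = np.asarray(initial_poses, dtype=float)
            k = len(self.poses)
            # uniform weights over the K hypotheses, plus one null hypothesis
            self.weights = np.full(k + 1, (1.0 - p_null) / k)
            self.weights[-1] = p_null
            self.background_likelihood = background_likelihood

        def predict(self, delta_pose):
            # apply the same odometry increment to every hypothesis in its own frame
            dx, dy, dyaw = delta_pose
            c, s = np.cos(self.poses[:, 2]), np.sin(self.poses[:, 2])
            self.poses[:, 0] += c * dx - s * dy
            self.poses[:, 1] += s * dx + c * dy
            self.poses[:, 2] += dyaw

        def update(self, matching_likelihoods):
            # matching_likelihoods: (K,) per-hypothesis scores in [0, 1];
            # the null hypothesis keeps a constant background likelihood
            lik = np.append(np.asarray(matching_likelihoods, dtype=float),
                            self.background_likelihood)
            self.weights = self.weights * lik
            self.weights = self.weights / self.weights.sum()

        def p_none_correct(self):
            # posterior probability that none of the tracked hypotheses is the true pose
            return float(self.weights[-1])

        def best_pose(self):
            return self.poses[int(np.argmax(self.weights[:-1]))]

    # usage example
    tracker = MultiHypothesisTracker([[0.0, 0.0, 0.0], [50.0, 10.0, 0.2], [120.0, -5.0, 3.1]])
    tracker.predict((1.0, 0.0, 0.01))
    tracker.update([0.9, 0.2, 0.1])   # first hypothesis matches the map well
    print(tracker.best_pose(), tracker.p_none_correct())

In the paper's setting, the per-hypothesis likelihoods would come from the deep sensor-to-map matching network evaluated around each hypothesis; here they are passed in as plain numbers.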

Bibliographic Details
Main Authors: Niklas Stannartz, Stefan Schutte, Markus Kuhn, Torsten Bertram
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Vehicle self-localization; cross-modality localization; global localization; place recognition; multi-hypothesis localization; HD map
Online Access: https://ieeexplore.ieee.org/document/10151856/
author Niklas Stannartz
Stefan Schutte
Markus Kuhn
Torsten Bertram
collection DOAJ
format Article
id doaj.art-53190458b6584afd935cbfb08d67ae82
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2023-01-01
publisher IEEE
series IEEE Access
spelling IEEE Access, vol. 11, pp. 60005-60027, 2023-01-01. DOI: 10.1109/ACCESS.2023.3286310. IEEE document 10151856.
Niklas Stannartz (https://orcid.org/0000-0002-1798-6713), Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
Stefan Schutte (https://orcid.org/0000-0003-3126-4626), Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
Markus Kuhn (https://orcid.org/0009-0005-1671-1004), ZF Automotive Germany GmbH, Düsseldorf, Germany
Torsten Bertram (https://orcid.org/0000-0002-6096-8190), Institute of Control Theory and Systems Engineering, TU Dortmund University, Dortmund, Germany
title Toward Precise Ambiguity-Aware Cross-Modality Global Self-Localization
topic Vehicle self-localization
cross-modality localization
global localization
place recognition
multi-hypothesis localization
HD map
url https://ieeexplore.ieee.org/document/10151856/