SVS-VPR: A Semantic Visual and Spatial Information-Based Hierarchical Visual Place Recognition for Autonomous Navigation in Challenging Environmental Conditions

Robust visual place recognition (VPR) enables mobile robots to identify previously visited locations. For this purpose, the extracted visual information and the place matching method play a significant role. In this paper, we critically review existing VPR methods and group them into three major categories based on the visual information used, i.e., handcrafted features, deep features, and semantics. Focusing on the benefits of convolutional neural networks (CNNs) and semantics, and the limitations of existing research, we propose a robust appearance-based place recognition method, termed SVS-VPR, implemented as a hierarchical model with two major components: global scene-based matching and local feature-based matching. The global scene semantics are extracted and compared with those of previously visited images to filter the match candidates, reducing the search space and computational cost. The local feature-based matching involves the extraction of robust local features from a CNN, which possess invariant properties against environmental conditions, and a place matching method that utilizes semantic, visual, and spatial information. SVS-VPR is evaluated on publicly available benchmark datasets using the true positive detection rate, recall at 100% precision, and area under the curve. Experimental findings demonstrate that SVS-VPR surpasses several state-of-the-art deep learning-based methods, improving robustness against significant changes in viewpoint and appearance while maintaining efficient matching time performance.

Bibliographic Details
Main Authors: Saba Arshad (Industrial Artificial Intelligence Research Center, Chungbuk National University, Cheongju 28644, Republic of Korea), Tae-Hyoung Park (Department of Intelligent Systems and Robotics, Chungbuk National University, Cheongju 28644, Republic of Korea)
Format: Article
Language: English
Published: MDPI AG, 2024-01-01
Series: Sensors
ISSN: 1424-8220
DOI: 10.3390/s24030906
Subjects: convolution features; visual place recognition; semantic segmentation; neural networks
Online Access: https://www.mdpi.com/1424-8220/24/3/906
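
Note on the described method: as a rough illustration of the two-stage matching outlined in the abstract, the Python sketch below implements a generic hierarchical place recognizer. A global semantic descriptor first prunes the database to a handful of candidates, and local CNN features with their image coordinates are then matched and checked for spatial consistency. This is a minimal sketch under stated assumptions, not the authors' SVS-VPR implementation: all function names are hypothetical, descriptors are assumed to be precomputed, L2-normalized NumPy arrays, and the cosine-similarity scoring, mutual-nearest-neighbour matching, and median-shift consistency check are generic stand-ins for the paper's semantic, visual, and spatial matching criteria.

import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two 1-D descriptor vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def global_candidates(query_sem, db_sems, top_k=5):
    # Stage 1: rank database images by global semantic similarity and keep
    # only the top-k as match candidates, shrinking the search space.
    scores = np.array([cosine_sim(query_sem, s) for s in db_sems])
    return np.argsort(scores)[::-1][:top_k]

def local_match_score(q_desc, q_xy, d_desc, d_xy, max_shift=40.0):
    # Stage 2: mutual-nearest-neighbour matching of local CNN descriptors,
    # followed by a simple spatial-consistency check that rejects matches
    # whose pixel shift deviates far from the median shift.
    sims = q_desc @ d_desc.T                  # (Nq, Nd) similarity matrix
    fwd = sims.argmax(axis=1)                 # best db feature per query feature
    bwd = sims.argmax(axis=0)                 # best query feature per db feature
    mutual = [i for i in range(len(q_desc)) if bwd[fwd[i]] == i]
    if not mutual:
        return 0.0
    shifts = np.array([d_xy[fwd[i]] - q_xy[i] for i in mutual])
    median_shift = np.median(shifts, axis=0)
    inliers = [s for s in shifts if np.linalg.norm(s - median_shift) < max_shift]
    return len(inliers) / len(q_desc)         # fraction of consistent matches

def recognise_place(query_sem, q_desc, q_xy, db_sems, db_descs, db_xys, top_k=5):
    # Hierarchical matching: semantic candidate filtering, then local verification.
    best_id, best_score = -1, 0.0
    for idx in global_candidates(query_sem, db_sems, top_k):
        score = local_match_score(q_desc, q_xy, db_descs[idx], db_xys[idx])
        if score > best_score:
            best_id, best_score = int(idx), score
    return best_id, best_score

The spatial check here is a crude median-shift filter chosen for brevity; the paper's actual combination of semantic, visual, and spatial cues during local matching, and its candidate filtering thresholds, will differ.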