Fast Dual-Feature Extraction Based on Tightly Coupled Lightweight Network for Visual Place Recognition

Visual place recognition (VPR) aims to predict the location of a query image by matching it against a database of existing images. Because image data can be massive, extracting features efficiently is critical. To address model redundancy and poor time efficiency in feature extraction, this study proposes a fast dual-feature extraction method based on a tightly coupled lightweight network. The tightly coupled network extracts local and global features in a unified model with a lightweight backbone. Learned step size quantization is then applied to reduce computational overhead at the inference stage, and an efficient channel attention module preserves feature representation ability. Efficiency and performance experiments on different hardware platforms showed that the proposed algorithm yields significant runtime savings for feature extraction, with inference 2.9–4.0 times faster than in the general model. The experimental results confirm that the proposed method can significantly improve VPR efficiency while maintaining accuracy.

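The abstract names two published building blocks: learned step size quantization (LSQ) and efficient channel attention (ECA). The PyTorch sketches below illustrate those general techniques only; they are not the authors' implementation, and details such as class names, bit width, and kernel size are assumptions made for illustration.

A minimal LSQ-style fake quantizer, assuming per-tensor quantization and the gradient scaling proposed in the original LSQ paper:

```python
import torch
import torch.nn as nn


class LSQQuantizer(nn.Module):
    """Sketch of a learned-step-size fake quantizer (LSQ-style).

    The step size ``s`` is a learnable parameter trained jointly with
    the network; round() is bypassed with a straight-through estimator.
    Hypothetical illustration, not the paper's actual code.
    """

    def __init__(self, bits: int = 8, signed: bool = True):
        super().__init__()
        if signed:
            self.q_min = -(2 ** (bits - 1))
            self.q_max = 2 ** (bits - 1) - 1
        else:
            self.q_min, self.q_max = 0, 2 ** bits - 1
        self.s = nn.Parameter(torch.tensor(1.0))  # learned step size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale the gradient of s by 1/sqrt(numel * q_max), as in LSQ.
        g = 1.0 / float(x.numel() * self.q_max) ** 0.5
        s = self.s * g + (self.s * (1.0 - g)).detach()  # value s, grad scaled by g
        q = torch.clamp(x / s, self.q_min, self.q_max)
        q = (q.round() - q).detach() + q  # straight-through round()
        return q * s  # dequantized output
```

Training with learned step sizes lets the deployed network run in low-bit arithmetic, which is the kind of saving that the reported 2.9–4.0 times inference speed-up over a general (unquantized) model targets.

An ECA block in its standard published form: global average pooling followed by a fast 1-D convolution across channels, with no dimensionality reduction. It reuses the imports from the sketch above:

```python
class ECA(nn.Module):
    """Sketch of an efficient channel attention (ECA-Net style) block."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        y = self.pool(x)                      # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)     # (B, 1, C)
        y = self.conv(y)                      # local cross-channel mixing
        y = y.transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * self.sigmoid(y)            # reweight channels
```

Unlike an SE block, ECA adds only k_size weights per layer, which fits the abstract's goal of preserving representation ability in a lightweight backbone.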

Bibliographic Details
Main Authors: Xiaofei Hu, Yang Zhou, Liang Lyu, Chaozhen Lan, Qunshan Shi, Mingbo Hou
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Visual place recognition; dual-feature extraction; tightly coupled; learned step size quantization
Online Access: https://ieeexplore.ieee.org/document/10313262/

Additional Details
Source: Directory of Open Access Journals (DOAJ)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3331371 (IEEE document 10313262)
Published in: IEEE Access, vol. 11, pp. 127855–127865, 2023
Author Affiliation: Institute of Geospatial Information, PLA Strategic Support Force Information Engineering University, Zhengzhou, China (all six authors)
ORCID: Yang Zhou (0000-0001-6667-3353); Liang Lyu (0000-0003-1168-210X); Chaozhen Lan (0000-0002-6860-3882)