DSD-MatchingNet: Deformable Sparse-to-Dense Feature Matching for Learning Accurate Correspondences


Bibliographic Details
Main Authors: Yicheng Zhao, Han Zhang, Ping Lu, Ping Li, EnHua Wu, Bin Sheng
Format: Article
Language: English
Published: KeAi Communications Co., Ltd. 2022-10-01
Series: Virtual Reality & Intelligent Hardware
Online Access: http://www.sciencedirect.com/science/article/pii/S2096579622000821
Description
Summary: Background: Exploring correspondences across multi-view images is the basis of many computer vision tasks. However, most existing methods have limited accuracy under challenging conditions. To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this paper. First, we develop a deformable feature extraction module to obtain multi-level feature maps, which harvests contextual information from dynamic receptive fields. The dynamic receptive fields provided by the deformable convolution network enable our method to obtain dense and robust correspondences. Second, we utilize sparse-to-dense matching with the symmetry of correspondence to implement accurate pixel-level matching, which enables our method to produce more accurate correspondences. Experiments have shown that our proposed DSD-MatchingNet achieves better performance on an image matching benchmark, as well as on a visual localization benchmark. Specifically, our method achieves 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recall on the Aachen Day-Night dataset.
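The "symmetry of correspondence" mentioned in the abstract is commonly enforced with a mutual nearest-neighbor check: a pair of pixels is kept only if each descriptor is the other's closest match. The NumPy sketch below illustrates that general idea; the function name, descriptor shapes, and use of plain L2 distance are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Return index pairs (i, j) where desc_a[i] and desc_b[j]
    are each other's nearest neighbor (symmetric matching).

    desc_a: (N, D) array of descriptors from image A
    desc_b: (M, D) array of descriptors from image B
    """
    # Pairwise L2 distance matrix of shape (N, M)
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    nn_ab = dists.argmin(axis=1)  # best match in B for each descriptor in A
    nn_ba = dists.argmin(axis=0)  # best match in A for each descriptor in B
    # Keep only mutual (symmetric) matches
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

One-sided nearest-neighbor matching tends to produce many false positives under viewpoint or illumination changes; requiring the match to hold in both directions is a cheap way to filter them before any dense refinement step.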
ISSN: 2096-5796