MESAC: Learning to Remove Mismatches via Maximizing the Expected Score of Sample Consensuses

Bibliographic Details
Main Authors: Shiyu Chen, Cailong Deng, Yong Zhang, Yong Wang, Qixin Zhang, Feiyan Chen, Zhimin Zhou
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Mismatch removal; permutation invariant network; categorical distribution; gradient explosion; score-function estimator
Online Access:https://ieeexplore.ieee.org/document/10138163/
collection DOAJ
description Most learning-based methods require labelled training data, which is time-consuming to produce and prone to wrong labels. To address these labelling issues thoroughly, we propose an unsupervised learning framework that removes mismatches by maximizing the expected score of sample consensuses (MESAC). MESAC can train various permutation invariant networks (PINs) on training data with no labels, and has three distinct merits: 1) the framework trains PINs in an unsupervised mode, so they are immune to wrong labels; 2) the gradients of the expected score are calculated explicitly by a revised score-function estimator, which avoids gradient explosion; 3) the distribution of matching probabilities is learned by the PIN and precisely modelled as a categorical distribution, which reduces the number of sampling iterations and accordingly improves computational efficiency. Experiments on test datasets show that embedding pure PINs in MESAC increases mean recall by up to 77% and improves mean precision by 16%. Applications in pose recovery indicate that MESAC-integrated PINs achieve higher success rates than the compared methods when trained with neither matching labels nor ground-truth epipolar geometry (EG) constraints, showing the great potential of MESAC for mismatch removal.
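The abstract names two key ingredients: a categorical distribution over matching probabilities and a score-function estimator for the gradient of the expected consensus score. As a rough illustration only (not the paper's revised estimator), the plain score-function (REINFORCE) identity grad E[s(z)] = E[(s(z) - b) * grad log p(z)] can be sketched in numpy; the logits, the toy score function, and the mean-score baseline b are illustrative assumptions, not values from the article.

```python
import numpy as np

def score_function_gradient(logits, score_fn, n_samples=1000, rng=None):
    """Estimate d/d(logits) of E_{z ~ Cat(softmax(logits))}[score_fn(z)]
    with the score-function (REINFORCE) estimator,
        grad = E[(score(z) - b) * grad log p(z)],
    using the mean sampled score as a variance-reducing baseline b."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(logits, dtype=float)
    # Softmax with max-shift for numerical stability.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Sample candidate indices from the categorical distribution.
    z = rng.choice(len(p), size=n_samples, p=p)
    s = np.array([score_fn(int(k)) for k in z], dtype=float)
    b = s.mean()  # baseline: keeps per-sample terms small and bounded
    grad = np.zeros_like(p)
    for k, sk in zip(z, s):
        # grad log p(z=k) w.r.t. logits of a softmax is (one_hot(k) - p).
        glogp = -p.copy()
        glogp[k] += 1.0
        grad += (sk - b) * glogp
    return grad / n_samples
```

With a score function that rewards one index, the estimated gradient pushes that index's logit up, and each sample's contribution sums to zero across components (since one_hot(k) and p both sum to 1), which is one simple sanity check on the estimator.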
first_indexed 2024-03-13T05:32:04Z
id doaj.art-fac627cc3ef54c7cb76bc4466e4de9be
institution Directory Open Access Journal
issn 2169-3536
last_indexed 2024-03-13T05:32:04Z
Volume 11, pp. 57117-57136. DOI: 10.1109/ACCESS.2023.3280824
Author affiliations:
Shiyu Chen (https://orcid.org/0000-0001-8715-0025), School of Geographic Sciences, Xinyang Normal University, Xinyang, China
Cailong Deng (https://orcid.org/0000-0001-8151-8183), Sichuan Institute of Land Science and Technology (Sichuan Center of Satellite Application Technology), Chengdu, China
Yong Zhang (https://orcid.org/0000-0003-0230-1395), Visiontek Inc., Wuhan, China
Yong Wang, Sichuan Institute of Land Science and Technology (Sichuan Center of Satellite Application Technology), Chengdu, China
Qixin Zhang, School of Geographic Sciences, Xinyang Normal University, Xinyang, China
Feiyan Chen, School of Geographic Sciences, Xinyang Normal University, Xinyang, China
Zhimin Zhou, Sichuan Institute of Land Science and Technology (Sichuan Center of Satellite Application Technology), Chengdu, China
topic Mismatch removal
permutation invariant network
categorical distribution
gradient explosion
score-function estimator