Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching

Bibliographic Details
Main Authors: David Valiente, Luis Payá, Luis M. Jiménez, Jose M. Sebastián, Óscar Reinoso
Format: Article
Language: English
Published: MDPI AG 2018-06-01
Series: Sensors
Subjects: omnidirectional imaging, visual localization, catadioptric sensor, visual information fusion
Online Access: http://www.mdpi.com/1424-8220/18/7/2041
collection DOAJ
description This work presents a visual information fusion approach for robust, probability-oriented feature matching. It relies on omnidirectional imaging and is tested in a visual localization framework for mobile robotics. General visual localization methods have been extensively studied and optimized for performance; however, one of the main threats to the final estimate is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information associated with SURF (Speeded-Up Robust Features) points detected in the images is inferred under the Bayesian framework established by Gaussian processes (GPs). This information represents a probability distribution of feature-point existence, which is successively fused and updated across the robot's poses. Second, this distribution can be sampled and projected onto the next 2D image frame, at t+1, by means of a filter-motion prediction. This strategy yields relevant areas in the image reference system in which probable matches can be sought, according to the accumulated probability of feature existence. The result is an adaptive, probability-oriented matching search that concentrates on significant areas of the image but also considers unseen parts of the scene, thanks to an internal modulation of the probability distribution domain computed from the current uncertainty of the system. The main outcomes confirm robust feature matching, which yields consistent localization estimates, with the odometer prior used to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared, first, with standard feature matching and, second, with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy, and computation time.
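The following is a minimal, illustrative Python sketch (not the authors' implementation) of the idea summarized in the abstract: a Gaussian-process posterior over feature existence, evaluated over the image frame predicted for t+1 from projected 3D SURF landmarks, is used to select probability-oriented search areas for matching. All names (gp_existence_posterior, project_to_frame, pose_uncertainty) are hypothetical, and a simple pinhole projection stands in for the paper's omnidirectional camera model.

    # Minimal illustrative sketch (not the authors' code): a Gaussian-process (GP)
    # posterior over feature existence is evaluated over the image predicted for
    # t+1, and the accumulated probability selects the areas in which SURF
    # matches are searched. Hypothetical names; a pinhole camera stands in for
    # the omnidirectional projection model used in the paper.
    import numpy as np

    def gp_existence_posterior(train_xy, train_p, query_xy, length_scale=40.0, noise=1e-3):
        # GP regression (RBF kernel) from 2D image positions to existence probability.
        def rbf(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / length_scale ** 2)
        K = rbf(train_xy, train_xy) + noise * np.eye(len(train_xy))
        mean = rbf(query_xy, train_xy) @ np.linalg.solve(K, train_p)
        return np.clip(mean, 0.0, 1.0)

    def project_to_frame(points_3d, R, t, K_cam):
        # Project 3D landmarks into the image frame predicted by the filter motion model.
        cam = R @ points_3d.T + t[:, None]
        uv = (K_cam @ cam)[:2] / cam[2]
        return uv.T

    rng = np.random.default_rng(0)
    landmarks = rng.uniform(-1.0, 1.0, size=(20, 3)) + np.array([0.0, 0.0, 4.0])
    p_exist = rng.uniform(0.2, 1.0, size=20)        # fused existence probabilities (synthetic)
    K_cam = np.array([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])
    uv = project_to_frame(landmarks, np.eye(3), np.zeros(3), K_cam)  # predicted pose at t+1

    # Evaluate the fused probability over a coarse image grid (640x480 frame assumed).
    gu, gv = np.meshgrid(np.linspace(0.0, 640.0, 33), np.linspace(0.0, 480.0, 25))
    grid = np.stack([gu.ravel(), gv.ravel()], axis=1)
    p_map = gp_existence_posterior(uv, p_exist, grid)

    # Adaptive, probability-oriented search: retain high-probability cells, but
    # lower the threshold as the filter uncertainty grows so that unseen parts
    # of the scene are still considered.
    pose_uncertainty = 0.3                          # e.g. a normalized covariance measure (made up)
    threshold = 0.6 * (1.0 - pose_uncertainty)
    search_cells = grid[p_map > threshold]          # candidate regions for SURF matching in t+1
    print(len(search_cells), "of", len(grid), "cells retained for matching")

The design point mirrored here is that the search domain is modulated by the current uncertainty of the system: a confident prediction narrows matching to a few high-probability areas, while a poorly constrained pose keeps a larger portion of the image open to matching.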
id doaj.art-f79d55fe98c24af8b20fb7f7af94b315
institution Directory Open Access Journal
issn 1424-8220
doi 10.3390/s18072041
citation Sensors, vol. 18, no. 7, 2041 (2018)
affiliation David Valiente, Luis Payá, Luis M. Jiménez, Óscar Reinoso: Department of Systems Engineering and Automation, Miguel Hernández University, Av. de la Universidad s/n, Ed. Innova, 03202 Elche (Alicante), Spain
affiliation Jose M. Sebastián: Centre for Automation and Robotics (CAR), UPM-CSIC, Technical University of Madrid, C/ José Gutiérrez Abascal, 2, 28006 Madrid, Spain