A Multilayer Perceptron-Based Spherical Visual Compass Using Global Features

Bibliographic Details
Main Authors: Yao Du, Carlos Mateo, Omar Tahri
Format: Article
Language: English
Published: MDPI AG, 2024-03-01
Series: Sensors
Subjects: omnidirectional cameras; robots; machine learning; global feature extraction; robot vision systems; localization
Online Access:https://www.mdpi.com/1424-8220/24/7/2246
collection DOAJ
description This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera’s field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes. To address these issues, this paper combines the utilization of image masks to mitigate abrupt changes in global feature values and the application of neural networks to tackle the modeling challenge posed by translational motion. By employing masks at various locations within the image, multiple estimations of rotation corresponding to the motion of each selected region can be obtained. Our contribution lies in offering a rapid method for implementing numerous masks on the image with real-time inference speed, rendering it suitable for embedded robot applications. Extensive experiments have been conducted on both real-world and synthetic datasets generated using Blender. The results obtained validate the accuracy, robustness, and real-time performance of the proposed method compared to a state-of-the-art method.
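The mask-based global-feature idea described in the abstract can be sketched in code. The paper's exact moment orders, mask shapes, and spherical sampling are not given in this record, so the function name `spherical_moment`, the point set, and the moment order (0, 0, 1) below are illustrative assumptions only: each mask selects a region of the spherical image, and a weighted moment is computed per region, yielding one feature value per mask.

```python
import numpy as np

def spherical_moment(intensity, points, mask, p, q, r):
    """Illustrative masked spherical moment:
    m_pqr = sum over masked sphere points of I(s) * x^p * y^q * z^r.
    `points` is an (N, 3) array of unit-sphere coordinates,
    `intensity` the per-point image intensity, and `mask` a boolean
    selection of the region of interest (hypothetical formulation,
    not the paper's exact definition)."""
    x, y, z = points[mask].T
    return np.sum(intensity[mask] * x**p * y**q * z**r)

# Toy example: four sphere points and two masks selecting different
# regions, giving one moment estimate per region (as the method's
# per-mask rotation estimates would each rely on such features).
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, -1.0]])
inten = np.array([0.2, 0.5, 0.9, 0.1])
masks = [np.array([True, True, False, False]),
         np.array([False, False, True, True])]
moments = [spherical_moment(inten, pts, m, 0, 0, 1) for m in masks]
```

In the paper's pipeline, per-mask features of this kind would feed the multilayer perceptron that accounts for translational motion; that network is not sketched here.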
id doaj.art-e960649b986b425397cdfce2f609c1fb
institution Directory Open Access Journal
issn 1424-8220
affiliations Yao Du: Université Bourgogne, 21000 Dijon, France; Carlos Mateo: ICB UMR CNRS 6303, Université Bourgogne, 21000 Dijon, France; Omar Tahri: ICB UMR CNRS 6303, Université Bourgogne, 21000 Dijon, France
volume 24
issue 7
article 2246
doi 10.3390/s24072246
topic omnidirectional cameras
robots
machine learning
global feature extraction
robot vision systems
localization