A Resilient Method for Visual–Inertial Fusion Based on Covariance Tuning
To improve localization and pose precision of visual–inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiment, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoC dataset, at all difficulty levels.
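The abstract describes computing the unit-weight RMSE of each residual block (visual reprojection and IMU preintegration) and feeding it into a covariance tuning function before a second round of nonlinear optimization. The paper's exact tuning function is not reproduced in this record, so the sketch below only illustrates the general idea with a plain variance-component-style rescaling; all function names and the NumPy formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def unit_weight_rmse(residuals, cov, redundancy):
    """Unit-weight RMSE (a posteriori sigma-0) of one residual block.

    residuals : (n,) residual vector for one sensor modality
    cov       : (n, n) a priori covariance of those residuals
    redundancy: degrees of freedom assigned to this block
    """
    # v^T P v with weight matrix P = cov^{-1}: the standard
    # least-squares quadratic form used in variance estimation.
    vtpv = residuals @ np.linalg.solve(cov, residuals)
    return np.sqrt(vtpv / redundancy)

def tune_covariances(res_vis, cov_vis, res_imu, cov_imu, r_vis, r_imu):
    """Rescale each modality's covariance by its squared unit-weight RMSE.

    A modality whose residuals are larger than its a priori covariance
    predicts (RMSE > 1) gets its covariance inflated, i.e. is
    down-weighted in the next optimization round.
    """
    s_vis = unit_weight_rmse(res_vis, cov_vis, r_vis)
    s_imu = unit_weight_rmse(res_imu, cov_imu, r_imu)
    return s_vis**2 * cov_vis, s_imu**2 * cov_imu
```

With this rescaling, a sensor whose actual errors exceed its modeled noise is automatically de-emphasized before the second optimization pass, which is the resilience mechanism the abstract alludes to.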
Main Authors: | Kailin Li, Jiansheng Li, Ancheng Wang, Haolong Luo, Xueqiang Li, Zidi Yang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-12-01 |
Series: | Sensors |
Subjects: | resilient sensor fusion; simultaneous localization and mapping; visual–inertial fusion; nonlinear optimization; covariance tuning |
Online Access: | https://www.mdpi.com/1424-8220/22/24/9836 |
---|---|
author | Kailin Li, Jiansheng Li, Ancheng Wang, Haolong Luo, Xueqiang Li, Zidi Yang |
author_sort | Kailin Li |
collection | DOAJ |
description | To improve localization and pose precision of visual–inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiment, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoC dataset, at all difficulty levels. |
format | Article |
id | doaj.art-5924829004da431b9cf07918e5b70f25 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
publishDate | 2022-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-5924829004da431b9cf07918e5b70f25 |
doi | 10.3390/s22249836 |
citation | Sensors, vol. 22, no. 24, art. 9836 (2022-12-01) |
affiliation | Institute of Geospatial Information, Information Engineering University, Zhengzhou 450001, China (all six authors) |
title | A Resilient Method for Visual–Inertial Fusion Based on Covariance Tuning |
topic | resilient sensor fusion; simultaneous localization and mapping; visual–inertial fusion; nonlinear optimization; covariance tuning |
url | https://www.mdpi.com/1424-8220/22/24/9836 |