Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation, and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
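The abstract describes an adaptive execution module that dynamically selects between full visual–inertial odometry and a cheaper optical-flow-based visual odometry. The paper's actual level-set policies are not reproduced in this record; the sketch below is only an illustrative stand-in, where the function name, the keyframe interval, and the tracking-quality threshold are all assumptions rather than the authors' design:

```python
def select_tracker(frame_idx, tracked_ratio, keyframe_interval=5, min_ratio=0.6):
    """Toy adaptive policy (illustrative only, not the paper's algorithm).

    Run the full, more expensive visual-inertial odometry periodically
    (every `keyframe_interval` frames) or whenever the fraction of
    successfully tracked optical-flow features drops below `min_ratio`;
    otherwise fall back to the cheaper optical-flow-based visual odometry.
    """
    if frame_idx % keyframe_interval == 0 or tracked_ratio < min_ratio:
        return "visual_inertial_odometry"
    return "optical_flow_vo"


# Example: frame 3 with healthy tracking uses the cheap path,
# but the same frame with degraded tracking falls back to full VIO.
print(select_tracker(3, tracked_ratio=0.9))  # optical_flow_vo
print(select_tracker(3, tracked_ratio=0.5))  # visual_inertial_odometry
```

A policy of this shape trades a small accuracy loss for lower average tracking time, which is the effect the abstract quantifies (7.8–18.8% reduction depending on the policy level).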
Main Authors: | Jin-Chun Piao, Shin-Dug Kim |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2017-11-01 |
Series: | Sensors |
Subjects: | monocular simultaneous localization and mapping; visual–inertial odometry; optical flow; adaptive execution; mobile device |
Online Access: | https://www.mdpi.com/1424-8220/17/11/2567 |
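The abstract reports an average translation root-mean-square error (RMSE) of the keyframe trajectory of approximately 0.0617 m on the EuRoC dataset. As background on that metric, translation RMSE over time-associated and aligned trajectories is conventionally computed as sketched below; the function name and tuple-based trajectory representation are our assumptions, not the paper's evaluation code, and trajectory alignment (e.g. a similarity transform between the estimate and ground truth) is assumed to have been done beforehand:

```python
import math


def translation_rmse(estimated, ground_truth):
    """Translation RMSE between two equal-length, time-associated,
    pre-aligned lists of (x, y, z) positions, in the same units (metres)."""
    if len(estimated) != len(ground_truth) or not estimated:
        raise ValueError("trajectories must be non-empty and equal length")
    total = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        # Squared Euclidean distance between paired keyframe positions.
        total += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(total / len(estimated))


# Two keyframes, each 0.1 m off in one axis -> RMSE of 0.1 m.
print(translation_rmse([(0, 0, 0), (1, 0, 0)],
                       [(0, 0, 0.1), (1.1, 0, 0)]))  # ~0.1
```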
---|---|
author | Jin-Chun Piao; Shin-Dug Kim
collection | DOAJ
issn | 1424-8220 |
doi | 10.3390/s17112567
citation | Sensors, vol. 17, no. 11, art. 2567 (2017-11-01)
affiliation | Department of Computer Science, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea (both authors)
topic | monocular simultaneous localization and mapping; visual–inertial odometry; optical flow; adaptive execution; mobile device