Monocular image space tracking on a computationally limited MAV


Bibliographic Details
Main Authors: Ok, Kyel, Gamage, Dinesh, Drummond, Tom, Dellaert, Frank, Roy, Nicholas
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017
Online Access: http://hdl.handle.net/1721.1/107201
https://orcid.org/0000-0001-9840-0552
https://orcid.org/0000-0002-8293-0492
Description
Summary: We propose a method of monocular camera-inertial navigation for computationally limited micro air vehicles (MAVs). Our approach is derived from the recent development of parallel tracking and mapping algorithms, but unlike previous results, we show how the tracking and mapping processes can operate using different representations. The separation of representations allows us not only to move the computational load of full map inference to a ground station, but also to further reduce the computational cost of on-board tracking for pose estimation. Our primary contribution is to show how the cost of tracking the vehicle pose on-board can be substantially reduced by estimating the camera motion directly in the image frame rather than in the world coordinate frame. We demonstrate our method on an Ascending Technologies Pelican quad-rotor and show that we can track the vehicle pose with reduced on-board computation and without compromising navigation accuracy.
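
The central idea in the abstract is that inter-frame camera motion can be estimated directly in image coordinates rather than in the world coordinate frame. As a rough, hypothetical illustration of what image-space motion estimation means at the level of 2D feature tracks (this is not the algorithm described in the paper), the sketch below fits a 2D similarity transform, i.e. scale, in-plane rotation, and pixel translation, to matched feature locations between two consecutive frames. The function name and the synthetic feature correspondences are assumptions made for the example.

```python
import numpy as np

def estimate_image_space_motion(pts_prev, pts_curr):
    """Fit pts_curr ~= s * R @ p + t for matched 2D feature locations.

    pts_prev, pts_curr: (N, 2) arrays of pixel coordinates of the same
    features in the previous and current frame (hypothetical input).
    """
    n = pts_prev.shape[0]
    mu_p = pts_prev.mean(axis=0)
    mu_c = pts_curr.mean(axis=0)
    P = pts_prev - mu_p                      # centred previous-frame features
    C = pts_curr - mu_c                      # centred current-frame features
    Sigma = C.T @ P / n                      # 2x2 cross-covariance
    U, D, Vt = np.linalg.svd(Sigma)
    # Guard against a reflection solution (Umeyama-style correction).
    S = np.diag([1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ S @ Vt                           # in-plane rotation
    s = np.trace(np.diag(D) @ S) / ((P ** 2).sum() / n)   # scale
    t = mu_c - s * R @ mu_p                  # pixel translation
    return s, R, t

# Synthetic check: features rotated by 2 degrees, scaled by 1%, shifted a few pixels.
rng = np.random.default_rng(0)
pts_prev = rng.uniform(0.0, 640.0, size=(50, 2))
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts_curr = 1.01 * pts_prev @ R_true.T + np.array([3.0, -1.5])
s, R, t = estimate_image_space_motion(pts_prev, pts_curr)
print(s, t)   # expect s close to 1.01 and t close to [3.0, -1.5]
```

A real camera-inertial system would additionally fuse IMU measurements and account for depth and parallax; the sketch only illustrates the general notion of tracking motion in the image frame, where the heavier world-frame map inference can be handled elsewhere (in the paper's case, on a ground station).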