Deep learning of monocular depth, optical flow and ego-motion with geometric guidance for UAV navigation in dynamic environments

Bibliographic Details
Main Authors: Fuseini Mumuni, Alhassan Mumuni, Christian Kwaku Amuzuvi
Format: Article
Language: English
Published: Elsevier 2022-12-01
Series: Machine Learning with Applications
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2666827022000913
Description
Summary: Computer vision-based depth estimation and visual odometry provide perceptual information useful for robot navigation tasks like obstacle avoidance. However, despite the proliferation of state-of-the-art convolutional neural network (CNN) models for monocular depth, ego-motion and optical flow estimation, relatively little work has been reported on their practical application in unmanned aerial vehicle (UAV) navigation. This is due to well-known challenges: embedded hardware constraints, viewpoint variations, scarcity of aerial image datasets, and the intricacies of dynamic environments. We address these limitations to facilitate real-world deployment of CNNs in UAV navigation. First, we devise an efficient confidence-weighted adaptive network (Cowan) training framework that iteratively leverages intermediate prediction confidences to enforce cross-task consistency over corresponding image regions. This achieves competitive accuracy with a lightweight CNN capable of real-time execution on resource-constrained embedded systems. Second, we devise a test-time refinement method that adapts the network to dynamic environments while simultaneously improving accuracy. To accomplish this, we first update the ego-motion estimate using pose information from the on-board inertial measurement unit (IMU). Then, we decompose the UAV's motion into its constituent vectors and, for each axis, formulate geometric relationships between depth and translation. Based on this information, we triangulate corresponding points acquired through optical flow. Finally, we enforce geometric consistency between the initially updated pose and the triangulated depth. Cowan with geometric guided refinement (Cowan-GGR) achieves significantly improved accuracy and robustness. Field tests show that the proposed model is capable of accurate depth and object-level motion perception in real-world dynamic environments, thus proving its efficacy in facilitating UAV navigation.
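
The geometrically guided refinement described in the summary can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the function names, the pose convention (R, t mapping frame-1 points into frame 2) and the simple mid-point triangulation are all chosen here for clarity. It shows the general idea of triangulating optical-flow correspondences against an IMU-updated relative pose and comparing the result with the CNN's monocular depth map.

import numpy as np

def pixel_to_rays(pts, K):
    # Back-project pixel coordinates (N, 2) to camera rays (N, 3) with z = 1.
    ones = np.ones((pts.shape[0], 1))
    return (np.linalg.inv(K) @ np.hstack([pts, ones]).T).T

def triangulate_depths(p1, p2, K, R, t):
    # Triangulate frame-1 depths for flow correspondences p1 -> p2, given the
    # IMU-updated relative pose (R, t) that maps frame-1 points into frame 2.
    d1 = pixel_to_rays(p1, K)               # rays from camera 1 (z component = 1)
    d2 = (R.T @ pixel_to_rays(p2, K).T).T   # frame-2 rays expressed in frame-1 axes
    c2 = -R.T @ t                           # camera-2 centre in frame-1 coordinates
    depths = np.empty(len(p1))
    for i in range(len(p1)):
        # Least-squares ray intersection: lambda1 * d1 ~ c2 + lambda2 * d2.
        A = np.stack([d1[i], -d2[i]], axis=1)
        lam, *_ = np.linalg.lstsq(A, c2, rcond=None)
        depths[i] = lam[0]                   # z of d1 is 1, so depth along the optical axis is lambda1
    return depths

def geometric_consistency(depth_pred, depths_tri, pts):
    # Mean relative disagreement between the CNN depth map (sampled at the
    # correspondence locations, pts given as (x, y)) and the triangulated depths.
    sampled = depth_pred[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    return float(np.mean(np.abs(sampled - depths_tri) / np.maximum(depths_tri, 1e-6)))

In a test-time adaptation loop, a residual of this kind could serve as the self-supervised signal that nudges the depth network towards geometric consistency with the IMU-updated pose, in the spirit of the refinement step the abstract describes.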
ISSN:2666-8270