Towards visual ego-motion learning in robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, but they are often restricted to a specific type of camera optics or to the underlying motion manifold observed. We envision robots to be able to learn and perform these tasks, in a minimally supervised setting, as they gain more expe...
Main Authors: Pillai, Sudeep; Leonard, John J
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE), 2019
Online Access: http://hdl.handle.net/1721.1/119893 https://orcid.org/0000-0001-7198-1772 https://orcid.org/0000-0002-8863-6550
Similar Items
- Learning articulated motions from visual demonstration
  by: Pillai, Sudeep
  Published: (2014)
- Learning-based geometry-aware ego-motion estimation
  by: Almalioglu, Y
  Published: (2021)
- Mobile robot ego motion estimation using RANSAC-based ceiling vision
  by: Wang, Han, et al.
  Published: (2013)
- Towards Accurate Ground Plane Normal Estimation from Ego-Motion
  by: Jiaxin Zhang, et al.
  Published: (2022-12-01)
- Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-Motion
  by: Huajun Liu, et al.
  Published: (2016-01-01)