Towards visual ego-motion learning in robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to the type of camera optics, or the underlying motion manifold observed. We envision robots to be able to learn and perform these tasks, in a minimally supervised setting, as they gain more expe...
Main Authors: Pillai, Sudeep; Leonard, John J
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE), 2019
Online Access: http://hdl.handle.net/1721.1/119893 https://orcid.org/0000-0001-7198-1772 https://orcid.org/0000-0002-8863-6550
Similar Items
- Learning articulated motions from visual demonstration
  by: Pillai, Sudeep
  Published: (2014)
- Mobile robot ego motion estimation using RANSAC-based ceiling vision
  by: Wang, Han, et al.
  Published: (2013)
- SLAM-aware, self-supervised perception in mobile robots
  by: Pillai, Sudeep
  Published: (2018)
- Exploring Developmental Change in Ego-Motion Experience Across Infancy
  by: Fuchs, Ariel
  Published: (2024)
- Monocular SLAM Supported Object Recognition
  by: Pillai, Sudeep, et al.
  Published: (2015)