Learning to See before Learning to Act: Visual Pre-training for Manipulation
Does having visual priors (e.g. the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g. picking up objects)? We study this problem under the framework of transfer learning, where the model is first trained on a passive vision task (i.e., the data distribution do...
| Main Authors: | Lin, Yen-Chen; Isola, Phillip John |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021 |
| Online Access: | https://hdl.handle.net/1721.1/129384 |
Similar Items
- See, feel, act: hierarchical learning for complex manipulation skills with multisensory fusion
  by: Fazeli, Nima, et al.
  Published: (2020)
- Visual Transfer Learning for Robotic Manipulation
  by: Lin, Yen-Chen
  Published: (2022)
- Learning to see in the dark
  by: Chen, Sihao
  Published: (2021)
- Learning to see physics via visual de-animation
  by: Wu, Jiajun, et al.
  Published: (2021)
- SparkleVision: Seeing the world through random specular microfacets
  by: Zhang, Zhengdong, et al.
  Published: (2016)