More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this letter, we investigate how a robot can learn to use t...
Main Authors: Calandra, Roberto; Owens, Andrew; Jayaraman, Dinesh; Lin, Justin; Yuan, Wenzhen; Malik, Jitendra; Adelson, Edward H.; Levine, Sergey
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2020
Online Access: https://hdl.handle.net/1721.1/126806
Similar Items
- Tactile Regrasp: Grasp Adjustments via Simulated Tactile Transformations
  by: Hogan, Francois R., et al.
  Published: (2021)
- Regrasping by Fixtureless Fixturing
  by: Chavan Dafle, Nikhil Narsingh, et al.
  Published: (2020)
- 3D shape perception from monocular vision, touch, and shape priors
  by: Wang, Shaoxiong, et al.
  Published: (2020)
- Tactile regrasp of objects with dynamic center-of-mass
  by: Than, Duc Huy
  Published: (2023)