Visuo-Tactile Perception for Dexterous Robotic Manipulation


Bibliographic Details
Main Author: Bauza Villalonga, Maria
Other Authors: Rodriguez, Alberto
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/147351
Description
Summary: In this thesis, we develop visuo-tactile perception to enable general and precise robotic manipulation. In particular, we study how to effectively process visual and tactile information to allow robots to expand their capabilities while remaining accurate and reliable. We begin our work by focusing on developing tools for tactile perception. For the task of grasping, we use tactile observations to assess and improve grasp stability. Tactile information also allows extracting geometric information from contacts, which is a task-independent feature. By learning to map tactile observations to contact shapes, we show that robots can reconstruct accurate 3D models of objects, which can later be used for pose estimation. We build on the idea of using geometric information from contacts by developing tools that accurately render contact geometry in simulation. This enables us to develop a probabilistic approach to pose estimation for novel objects based on matching real visuo-tactile observations to a set of simulated ones. As a result, our method does not rely on real data and yields accurate pose distributions. Finally, we demonstrate how this approach to perception enables precise manipulations. In particular, we consider the task of precise pick-and-place of novel objects. Combining perception with task-aware planning, we build a robotic system that identifies in simulation which object grasps will facilitate grasping, planning, and perception, and selects the best one during execution. Our approach adapts to new objects by learning object-dependent models purely in simulation, allowing a robot to manipulate new objects successfully and perform highly accurate placements.
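The probabilistic pose estimation described above, matching a real observation against a set of simulated ones to obtain a distribution over poses, can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the function name `pose_posterior`, the embedding representation, and the Gaussian observation model with scale `sigma` are all illustrative assumptions.

```python
import numpy as np

def pose_posterior(real_embedding, sim_embeddings, sigma=0.1):
    """Illustrative sketch: score each candidate pose by how closely its
    simulated contact embedding matches the real observation, then
    normalize the scores into a probability distribution over poses.
    The Gaussian observation model and sigma are assumptions."""
    # Squared distance from the real observation to each simulated one.
    d2 = np.sum((sim_embeddings - real_embedding) ** 2, axis=1)
    # Log-likelihood under an isotropic Gaussian observation model.
    logp = -d2 / (2.0 * sigma ** 2)
    logp -= logp.max()  # subtract max for numerical stability
    p = np.exp(logp)
    return p / p.sum()  # normalize into a posterior over candidate poses

# Toy usage: four candidate poses with 3-D contact embeddings; the real
# observation is closest to candidate 2, so it gets the highest probability.
sims = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.1, 0.05, 0.0],
                 [0.0, 1.0, 1.0]])
real = np.array([0.1, 0.05, 0.0])
post = pose_posterior(real, sims)
```

Because the output is a full distribution rather than a single best match, downstream planning can account for ambiguity, e.g. when several candidate poses explain the contact equally well.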