Integrating force-based manipulation primitives with deep visual servoing for robotic assembly


Bibliographic Details
Main Author: Lee, Yee Sien
Other Authors: Pham Quang Cuong
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2022
Online Access:https://hdl.handle.net/10356/157880
Description
Summary: This paper explores the idea of combining Deep Learning-based Visual Servoing with dynamic sequences of force-based Manipulation Primitives for robotic assembly tasks. Most current peg-in-hole algorithms assume the initial peg pose is already aligned within a minute deviation range before a tight-clearance insertion is attempted. By integrating tactile and visual information, highly accurate peg alignment before insertion can be achieved autonomously. In the alignment phase, the peg mounted on the end-effector is aligned automatically from an initial pose with large displacement errors to an estimated insertion pose with errors below 1.5 mm in translation and 1.5° in rotation, using a single one-shot Deep Learning-based Visual Servoing estimate. If Deep Learning-based Visual Servoing alone cannot complete the peg-in-hole insertion, a dynamic sequence of Manipulation Primitives is automatically generated via Reinforcement Learning to finish the last stage of insertion.
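
The summary describes a two-stage pipeline: a one-shot visual-servoing alignment, followed by an RL-selected sequence of force-based primitives if direct insertion fails. The Python sketch below illustrates only that control flow; all interfaces (the robot, camera, servoing_net, and primitive_policy objects and their methods, and the success thresholds) are hypothetical assumptions and do not reflect the authors' actual implementation.

```python
import numpy as np

def one_shot_alignment(robot, camera, servoing_net):
    """Phase 1 (assumed interface): a single deep-network prediction maps one
    camera image to a corrective end-effector motion toward the hole."""
    image = camera.capture()
    translation_mm, rotation_deg = servoing_net.predict(image)  # one-shot estimate
    robot.move_relative(translation_mm, rotation_deg)

def insertion_succeeded(robot, depth_threshold_mm=20.0, force_limit_n=5.0):
    """Placeholder success check: peg is deep enough and contact forces are low.
    Thresholds are illustrative, not from the paper."""
    wrench = robot.force_torque()           # 6-vector [Fx, Fy, Fz, Tx, Ty, Tz]
    depth = robot.insertion_depth()
    return depth > depth_threshold_mm and np.linalg.norm(wrench[:3]) < force_limit_n

def assemble(robot, camera, servoing_net, primitive_policy, max_steps=50):
    # Stage 1: coarse-to-fine alignment via one-shot deep visual servoing.
    one_shot_alignment(robot, camera, servoing_net)

    # Try a direct insertion after alignment.
    robot.push_down()
    if insertion_succeeded(robot):
        return True

    # Stage 2: an RL policy picks a dynamic sequence of force-based
    # manipulation primitives (e.g. tilt, slide, spiral search) from the
    # current force/torque state until insertion completes.
    for _ in range(max_steps):
        state = np.concatenate([robot.force_torque(), [robot.insertion_depth()]])
        primitive = primitive_policy.select(state)  # discrete primitive choice
        primitive.execute(robot)
        if insertion_succeeded(robot):
            return True
    return False
```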