Self-Correction for Eye-In-Hand Robotic Grasping Using Action Learning

Bibliographic Details
Main Authors: Muslikhin, Jenq-Ruey Horng, Szu-Yueh Yang, Ming-Shyan Wang
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Subjects: Action learning; deep learning; eye-in-hand manipulator; k-nearest neighbor; robotic manipulator; robotic grasping
Online Access: https://ieeexplore.ieee.org/document/9622215/
author Muslikhin
Jenq-Ruey Horng
Szu-Yueh Yang
Ming-Shyan Wang
collection DOAJ
description Deep learning developed over the last decade does not fully solve robotic grasping of heterogeneous targets in cluttered scenes. The main problem is that the learned intelligence is static: it achieves high accuracy in ordinary environments, whereas a cluttered grasping environment is highly irregular. In this paper, action learning for robotic grasping with eye-in-hand coordination is developed to grasp a wide range of objects in clutter using a 6-degree-of-freedom (DOF) robotic manipulator equipped with a three-finger gripper. Action learning is realized with k-Nearest Neighbor (kNN) classification, a Disparity Map (DM), and You Only Look Once (YOLO). After the learning cycle is formulated, an assessment instrument evaluates the robot’s environment and performance with qualitative weightings. Experiments were conducted on measuring the depth of the target, localizing target variations, detecting the target, and the grasping process itself. Each action learning cycle proceeds through plan, act, observe, and reflect; if a cycle does not meet the minimum pass standard, a new cycle begins until the robot succeeds in picking and placing. This study demonstrates that the action learning-based object manipulation system with stereo-like vision and eye-in-hand calibration can improve its intelligence by correcting previous errors within an acceptable error range. Thus, action learning may be applicable to other object manipulation systems without having to define the environment first.
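
The description above outlines a plan-act-observe-reflect cycle that repeats until a grasp meets a minimum pass standard. As a rough, non-authoritative sketch only, the short Python program below illustrates how such a self-correcting cycle could be wired together. Every helper (plan_grasp, execute_grasp, observe_outcome, reflect_and_score) and the threshold MIN_PASS_SCORE are hypothetical placeholders standing in for the paper's YOLO detection, disparity-map depth estimation, and kNN-based assessment; none of them are taken from the published system.

import random

MIN_PASS_SCORE = 0.8  # assumed stand-in for the "minimum pass standard"

# --- placeholder stubs; the real system would call perception and arm control here ---
def plan_grasp(scene, corrections):
    # Plan: pick a target and apply any correction learned in earlier cycles.
    return {"target": scene["targets"][0], "offset": corrections.get("offset", 0.0)}

def execute_grasp(grasp):
    # Act: would command the 6-DOF manipulator and three-finger gripper.
    pass

def observe_outcome(scene, grasp):
    # Observe: would re-detect the target and verify the pick; random stand-in here.
    return {"picked": random.random() > 0.3, "error_mm": random.uniform(0.0, 10.0)}

def reflect_and_score(result, corrections):
    # Reflect: score the attempt and update corrections for the next cycle.
    score = 1.0 if result["picked"] else 0.0
    corrections["offset"] = corrections.get("offset", 0.0) + 0.001 * result["error_mm"]
    return score, corrections

def action_learning_pick_and_place(scene, max_cycles=5):
    """Repeat plan -> act -> observe -> reflect until the grasp passes."""
    corrections = {}
    for _cycle in range(max_cycles):
        grasp = plan_grasp(scene, corrections)        # plan
        execute_grasp(grasp)                          # act
        result = observe_outcome(scene, grasp)        # observe
        score, corrections = reflect_and_score(result, corrections)  # reflect
        if score >= MIN_PASS_SCORE:
            return True                               # picking and placing succeeded
    return False                                      # pass standard never met

if __name__ == "__main__":
    print(action_learning_pick_and_place({"targets": ["screw"]}))
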
format Article
id doaj.art-749f7dd7ae2046d8a728f4e0c6b8a779
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2021-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling IEEE Access, vol. 9, pp. 156422-156436, 2021-01-01. DOI 10.1109/ACCESS.2021.3129474 (IEEE Xplore article 9622215). Authors: Muslikhin (https://orcid.org/0000-0001-7659-5491), Jenq-Ruey Horng, Szu-Yueh Yang, and Ming-Shyan Wang (https://orcid.org/0000-0002-7408-6420), all with the Department of Electrical Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan.
title Self-Correction for Eye-In-Hand Robotic Grasping Using Action Learning
topic Action learning
deep learning
eye-in-hand manipulator
k-nearest neighbor
robotic manipulator
robotic grasping
url https://ieeexplore.ieee.org/document/9622215/