Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration

Recent advances in the field of collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial to make predictions about the operator’s intentions. In this context, this paper proposes a novel learning-based framework to enable an assistive robot to recognize the object grasped by the human operator based on the pattern of the hand and finger joints. The framework combines the strengths of the commonly available software MediaPipe in detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on the comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models.
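The pipeline summarized above reduces each detected hand to 21 MediaPipe (x, y, z) landmarks and feeds them to a multi-class classifier. As a minimal sketch of one plausible preprocessing step between those two stages, the hypothetical `normalize_landmarks` function below centers the landmarks on the wrist and scales them by hand size; the paper's exact normalization is not specified in this record, so this is an illustrative assumption, not the authors' method.

```python
import math

def normalize_landmarks(landmarks):
    """Turn 21 (x, y, z) hand landmarks into a 63-dimensional feature vector.

    Hypothetical preprocessing sketch: translate so the wrist (landmark 0)
    sits at the origin, then divide by the largest wrist-to-landmark
    distance so the features are roughly invariant to hand size and
    distance from the camera.
    """
    assert len(landmarks) == 21, "MediaPipe Hands yields 21 landmarks per hand"
    wx, wy, wz = landmarks[0]
    # Center every landmark on the wrist.
    centered = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    # Scale by the largest distance from the wrist (guard against zero).
    scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    flat = []
    for x, y, z in centered:
        flat.extend((x / scale, y / scale, z / scale))
    return flat  # flattened input for a CNN or transformer classifier
```

A vector like this could then be batched and passed to either of the two compared architectures; the choice of wrist-relative normalization is one common way to make keypoint classifiers robust across users and sessions.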

Full Description

Bibliographic Details
Main Authors: Pedro Amaral, Filipe Silva, Vítor Santos
Format: Article
Language: English
Published: MDPI AG, 2023-11-01
Series: Sensors
Subjects: collaborative robotics; object recognition; hand–object interaction; grasping posture; keypoints classification
Online Access: https://www.mdpi.com/1424-8220/23/21/8989
ISSN: 1424-8220
Citation: Sensors, vol. 23, no. 21, article 8989 (2023). DOI: 10.3390/s23218989
Author affiliations:
Pedro Amaral: Department of Electronics, Telecommunications and Informatics (DETI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal
Filipe Silva: Department of Electronics, Telecommunications and Informatics (DETI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal
Vítor Santos: Department of Mechanical Engineering (DEM), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal