Scene images and text information‐based object location of robot grasping

Bibliographic Details
Main Authors: Zhichao Liu, Kaixuan Ding, Qingyang Xu, Yong Song, Xianfeng Yuan, Yibin Li
Format: Article
Language: English
Published: Wiley 2022-06-01
Series: IET Cyber-systems and Robotics
Online Access: https://doi.org/10.1049/csy2.12049
Description
Summary: With the rapid development of artificial intelligence (AI), the application of this technology in the medical field is becoming increasingly extensive, and the number of intelligent devices in hospitals is gradually increasing. Service robots can save human resources and take over some of the work of nursing staff. For the task of a mobile service robot grasping and delivering patients' medicines in hospitals, a real-time object detection and positioning system based on image and text information is proposed, which achieves precise positioning and tracking of the target objects and completes the grasping of a specific object (a medicine bottle). The lightweight object detection model NanoDet is used to learn the features of the target objects and to regress object categories and bounding boxes. The images in the bounding boxes are then enhanced to overcome unfavourable factors, such as a small object region. The text detection and recognition model PP-OCR is used to detect and recognise text in the enhanced images and extract the text information. The object information provided by the two models is fused, and the text recognition result is matched with the object detection box to achieve precise positioning of the target object. The kernel correlation filter (KCF) tracking algorithm is introduced to achieve real-time tracking of the specific object and to precisely control the robot's grasping. Both deep learning models adopt lightweight networks to facilitate direct deployment. Experiments show that the proposed robot grasping detection system has high reliability, accuracy and real-time performance.
ISSN:2631-6315
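
The abstract describes a pipeline of object detection (NanoDet), region enhancement, text recognition (PP-OCR), detection-text matching, and KCF tracking. The Python sketch below shows how such a pipeline could be wired together; it is not the authors' implementation. The `run_nanodet` wrapper is a hypothetical placeholder, the enhancement step is reduced to a simple upscale, and the PaddleOCR and OpenCV KCF APIs may differ slightly between versions.

```python
# Minimal sketch of a detect -> enhance -> OCR -> match -> track pipeline,
# assuming PaddleOCR (PP-OCR) and opencv-contrib-python are installed.
import cv2
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")  # PP-OCR text detection + recognition

def run_nanodet(frame):
    """Hypothetical wrapper around a NanoDet model.
    Should return a list of (x, y, w, h, class_name) detections."""
    raise NotImplementedError  # replace with actual NanoDet inference

def locate_target(frame, target_text):
    """Match OCR text inside each detected box against the requested label
    (e.g. a medicine name) and return the matching box, if any."""
    for (x, y, w, h, cls) in run_nanodet(frame):
        crop = frame[y:y + h, x:x + w]
        # Simple enhancement of a small object region before OCR (upscaling)
        crop = cv2.resize(crop, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
        result = ocr.ocr(crop)  # result layout varies across paddleocr versions
        texts = [line[1][0] for line in (result[0] or [])]
        if any(target_text.lower() in t.lower() for t in texts):
            return (x, y, w, h)
    return None

def track_target(source, target_text):
    """Initialise a KCF tracker on the matched box and follow it frame by
    frame, yielding box centres a grasp controller could consume."""
    cap = cv2.VideoCapture(source)
    ok, frame = cap.read()
    if not ok:
        return
    box = locate_target(frame, target_text)
    if box is None:
        return
    tracker = cv2.TrackerKCF_create()  # requires opencv-contrib-python
    tracker.init(frame, box)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        if ok:
            x, y, w, h = (int(v) for v in box)
            yield (x + w // 2, y + h // 2)  # target centre in image coordinates
    cap.release()
```

In this sketch the OCR result is only used to disambiguate between visually similar detections (e.g. several medicine bottles), after which the cheaper KCF tracker takes over so that per-frame detection and OCR are not required during grasp execution.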