Neuromorphic Eye-in-Hand Visual Servoing

Bibliographic Details
Main Authors: Rajkumar Muthusamy, Abdulla Ayyad, Mohamad Halwani, Dewald Swart, Dongming Gan, Lakmal Seneviratne, Yahya Zweiri
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
ISSN: 2169-3536
Collection: Directory of Open Access Journals (DOAJ)
Subjects: Neuromorphic vision sensor; event camera; event-based visual servoing; robotic vision; robotic manipulator; neuromorphic vision-based robot control
Online Access: https://ieeexplore.ieee.org/document/9395430/
Description:

Robotic vision plays a major role in applications ranging from factory automation to service robots. However, conventional frame-based cameras limit continuous visual feedback because of their low sampling rate, their poor performance in low-light conditions, and the redundant data they produce for real-time image processing, especially in high-speed tasks. Neuromorphic event-based vision is a recent technology that provides human-like vision capabilities, such as asynchronously observing dynamic changes at high temporal resolution (1 μs) with low latency and a wide dynamic range. In this paper, for the first time, we present a purely event-based visual servoing method that uses a neuromorphic camera in an eye-in-hand configuration within the grasping pipeline of a robotic manipulator. We devise three surface layers of active events to directly process the incoming stream of events generated by relative motion. A purely event-based approach is used to detect corner features, localize them robustly using heatmaps, and generate virtual features for tracking and grasp alignment. Based on this visual feedback, the motion of the robot is controlled so that the incoming event features converge to the desired events in spatio-temporal space. The controller switches its operation so that it explores the workspace, reaches the target object, and achieves a stable grasp. The event-based visual servoing (EBVS) method is comprehensively studied and validated experimentally using a commercial robot manipulator in an eye-in-hand configuration, for both static and dynamic targets. Experimental results show superior performance of the EBVS method over frame-based vision, especially in high-speed operations and under poor lighting conditions. As such, EBVS overcomes the motion blur, lighting, and exposure-timing issues that affect conventional frame-based visual servoing methods.
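
The abstract above describes the method only at a high level; the paper's actual algorithms are not reproduced in this record. Purely as an illustrative sketch of the generic building blocks the abstract alludes to, the Python fragment below shows (a) a per-pixel surface of active events (a time surface), the standard structure for processing asynchronous event streams, and (b) a classical image-based visual servoing (IBVS) velocity law that drives tracked features toward their desired image locations. All class and function names, the decay constant, and the control gain are assumptions made for illustration, not the authors' implementation, which additionally uses three event-surface layers, heatmap-based corner localization, virtual features, and a mode-switching controller.

import numpy as np

# Illustrative sketch only; hypothetical names, not the paper's code.

class SurfaceOfActiveEvents:
    """Time surface: stores the timestamp of the latest event at each pixel."""

    def __init__(self, height, width, decay_s=0.05):
        self.t_last = np.full((height, width), -np.inf)  # last event time per pixel
        self.decay_s = decay_s  # exponential decay constant (assumed value)

    def update(self, x, y, t):
        # Record an incoming event (x, y, t); polarity is ignored in this sketch.
        self.t_last[y, x] = t

    def snapshot(self, t_now):
        # Exponentially decayed activity map; recently active pixels are near 1.
        return np.exp((self.t_last - t_now) / self.decay_s)

def ibvs_velocity(features, desired, L, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ e, with e the feature error."""
    e = (features - desired).reshape(-1)  # stacked pixel-space error
    return -gain * np.linalg.pinv(L) @ e  # 6-DOF camera twist command

For example, with four tracked corner features, features and desired would each be 4x2 arrays of pixel coordinates and L the stacked 8x6 interaction matrix; the returned 6-vector is a camera twist command that a Cartesian velocity controller on the manipulator could execute.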

Citation:
R. Muthusamy, A. Ayyad, M. Halwani, D. Swart, D. Gan, L. Seneviratne and Y. Zweiri, "Neuromorphic Eye-in-Hand Visual Servoing," IEEE Access, vol. 9, pp. 55853-55870, 2021, doi: 10.1109/ACCESS.2021.3071261 (IEEE article no. 9395430).

Author Affiliations:
Rajkumar Muthusamy (ORCID: 0000-0002-5372-0154): Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
Abdulla Ayyad (ORCID: 0000-0002-3006-2320): Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
Mohamad Halwani (ORCID: 0000-0002-8478-2729): Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
Dewald Swart: Research and Development, Strata Manufacturing PJSC, Al Ain, United Arab Emirates
Dongming Gan (ORCID: 0000-0001-5327-1902): School of Engineering Technology, Purdue University, West Lafayette, IN, USA
Lakmal Seneviratne (ORCID: 0000-0001-6405-8402): Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
Yahya Zweiri (ORCID: 0000-0003-4331-7254): Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates