Learning-driven coarse-to-fine articulated robot tracking
In this work we present an articulated tracking approach for robotic manipulators, which relies only on visual cues from colour and depth images to estimate the robot’s state when interacting with or being occluded by its environment. We hypothesise that articulated model fitting approaches can only achieve accurate tracking if subpixel-level accurate correspondences between the observed and estimated state can be established.
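The abstract outlines a two-stage pipeline: a coarse joint-state estimate predicted from a depth image, refined by minimising a combined keypoint-and-edge reprojection objective. The snippet below is a minimal, hypothetical Python illustration of that refinement step only; the linear `forward_points` stand-in, the function names, and the edge weighting are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the fine (refinement) stage of a coarse-to-fine
# articulated tracker: a coarse joint-state estimate q0 (e.g. sampled from a
# state distribution predicted from a depth image) is refined by minimising
# stacked keypoint + colour-edge residuals. The toy linear "forward
# kinematics" below is illustrative only.
import numpy as np
from scipy.optimize import least_squares

def forward_points(q, model):
    """Toy stand-in for forward kinematics + projection: maps joint angles q
    to predicted 2D image locations of tracked keypoints/edge samples."""
    return model @ q

def residuals(q, kp_model, kp_obs, edge_model, edge_obs, w_edge=0.5):
    """Stacked residuals: keypoint correspondences plus down-weighted
    colour-edge correspondences, forming one combined tracking objective."""
    r_kp = forward_points(q, kp_model) - kp_obs
    r_edge = w_edge * (forward_points(q, edge_model) - edge_obs)
    return np.concatenate([r_kp, r_edge])

rng = np.random.default_rng(0)
n_joints = 7                                   # e.g. a 7-DoF arm
kp_model = rng.normal(size=(16, n_joints))     # toy keypoint "projection"
edge_model = rng.normal(size=(64, n_joints))   # toy edge-sample "projection"

# Synthetic ground truth and noisy observations for the demo.
q_true = rng.normal(size=n_joints)
kp_obs = kp_model @ q_true + 0.01 * rng.normal(size=16)
edge_obs = edge_model @ q_true + 0.01 * rng.normal(size=64)

q0 = q_true + 0.3 * rng.normal(size=n_joints)  # coarse, learned initialisation
fit = least_squares(residuals, q0,
                    args=(kp_model, kp_obs, edge_model, edge_obs))
print("initial error:", np.linalg.norm(q0 - q_true))
print("refined error:", np.linalg.norm(fit.x - q_true))
```

Stacking both residual types in one least-squares problem is one simple way to realise a "combined keypoint and edge tracking objective"; the relative weight `w_edge` is a free parameter here.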
Main Authors: | Rauch, C; Ivan, V; Hospedales, T; Shotton, J; Fallon, M |
---|---|
Format: | Conference item |
Published: | IEEE, 2019 |
_version_ | 1826274745776078848 |
---|---|
author | Rauch, C; Ivan, V; Hospedales, T; Shotton, J; Fallon, M |
author_facet | Rauch, C; Ivan, V; Hospedales, T; Shotton, J; Fallon, M |
author_sort | Rauch, C |
collection | OXFORD |
description | In this work we present an articulated tracking approach for robotic manipulators, which relies only on visual cues from colour and depth images to estimate the robot’s state when interacting with or being occluded by its environment. We hypothesise that articulated model fitting approaches can only achieve accurate tracking if subpixel-level accurate correspondences between the observed and estimated state can be established. Previous work in this area has relied exclusively on either discriminative depth information or colour edge correspondences as the tracking objective, and has required initialisation from joint encoders. In this paper we propose a coarse-to-fine articulated state estimator which relies only on visual cues from colour edges and learned depth keypoints, and which is initialised from a robot state distribution predicted from a depth image. We evaluate our approach on four RGB-D sequences showing a KUKA LWR arm with a Schunk SDH2 hand interacting with its environment, and demonstrate that this combined keypoint and edge tracking objective can estimate the palm position with an average error of 2.5 cm without using any joint encoder sensing. |
first_indexed | 2024-03-06T22:48:07Z |
format | Conference item |
id | oxford-uuid:5de51520-cc7b-4dce-84c9-3f92a643f868 |
institution | University of Oxford |
last_indexed | 2024-03-06T22:48:07Z |
publishDate | 2019 |
publisher | IEEE |
record_format | dspace |
spelling | oxford-uuid:5de51520-cc7b-4dce-84c9-3f92a643f868 2022-03-26T17:37:04Z Learning-driven coarse-to-fine articulated robot tracking Conference item http://purl.org/coar/resource_type/c_5794 uuid:5de51520-cc7b-4dce-84c9-3f92a643f868 Symplectic Elements at Oxford IEEE 2019 Rauch, C; Ivan, V; Hospedales, T; Shotton, J; Fallon, M. In this work we present an articulated tracking approach for robotic manipulators, which relies only on visual cues from colour and depth images to estimate the robot’s state when interacting with or being occluded by its environment. We hypothesise that articulated model fitting approaches can only achieve accurate tracking if subpixel-level accurate correspondences between the observed and estimated state can be established. Previous work in this area has relied exclusively on either discriminative depth information or colour edge correspondences as the tracking objective, and has required initialisation from joint encoders. In this paper we propose a coarse-to-fine articulated state estimator which relies only on visual cues from colour edges and learned depth keypoints, and which is initialised from a robot state distribution predicted from a depth image. We evaluate our approach on four RGB-D sequences showing a KUKA LWR arm with a Schunk SDH2 hand interacting with its environment, and demonstrate that this combined keypoint and edge tracking objective can estimate the palm position with an average error of 2.5 cm without using any joint encoder sensing. |
spellingShingle | Rauch, C; Ivan, V; Hospedales, T; Shotton, J; Fallon, M; Learning-driven coarse-to-fine articulated robot tracking |
title | Learning-driven coarse-to-fine articulated robot tracking |
title_full | Learning-driven coarse-to-fine articulated robot tracking |
title_fullStr | Learning-driven coarse-to-fine articulated robot tracking |
title_full_unstemmed | Learning-driven coarse-to-fine articulated robot tracking |
title_short | Learning-driven coarse-to-fine articulated robot tracking |
title_sort | learning driven coarse to fine articulated robot tracking |
work_keys_str_mv | AT rauchc learningdrivencoarsetofinearticulatedrobottracking AT ivanv learningdrivencoarsetofinearticulatedrobottracking AT hospedalest learningdrivencoarsetofinearticulatedrobottracking AT shottonj learningdrivencoarsetofinearticulatedrobottracking AT fallonm learningdrivencoarsetofinearticulatedrobottracking |