Modelling the shape hierarchy for visually guided grasping

The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP)...

Full description

Bibliographic Details
Main Authors: Omid Rezai, Ashley Kleinhans, Eduardo Matallanas, Ben Selby, Bryan P Tripp
Format: Article
Language: English
Published: Frontiers Media S.A. 2014-10-01
Series: Frontiers in Computational Neuroscience
Subjects: AIP, 3D shape, grasping, CIP, cosine tuning, superquadrics
Online Access: http://journal.frontiersin.org/Journal/10.3389/fncom.2014.00132/full
_version_ 1819131798280470528
author Omid Rezai
Ashley Kleinhans
Ashley Kleinhans
Eduardo Matallanas
Ben Selby
Bryan P Tripp
author_facet Omid Rezai
Ashley Kleinhans
Ashley Kleinhans
Eduardo Matallanas
Ben Selby
Bryan P Tripp
author_sort Omid Rezai
collection DOAJ
description The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e. distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. However (in contrast with superquadrics) further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
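The description above outlines two ingredients of the modelling pipeline: superquadric shape parameters on one hand, and an Isomap dimension reduction of spatial derivatives of depth on the other. The following is a minimal Python sketch of those two ingredients, not the authors' code; it uses the standard superquadric inside-outside function and scikit-learn's Isomap, and names such as depth_maps, the 32x32 map size, and the neighbour/dimension settings are illustrative assumptions.

# Hypothetical sketch, not the authors' implementation: the standard
# superquadric inside-outside function, and an Isomap reduction of
# depth-map derivatives, as described in the abstract.
import numpy as np
from sklearn.manifold import Isomap

def superquadric_inside_outside(x, y, z, a1, a2, a3, e1, e2):
    # F < 1 inside the surface, F = 1 on it, F > 1 outside.
    return ((np.abs(x / a1) ** (2.0 / e2) +
             np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1) +
            np.abs(z / a3) ** (2.0 / e1))

def depth_derivative_features(depth_maps):
    # Flatten the first spatial derivatives of each depth map into one
    # feature vector per object (depth = distance from observer to surface).
    feats = []
    for d in depth_maps:
        dy, dx = np.gradient(d)
        feats.append(np.concatenate([dx.ravel(), dy.ravel()]))
    return np.asarray(feats)

# Illustrative usage with random stand-ins for rendered depth maps.
rng = np.random.default_rng(0)
depth_maps = rng.standard_normal((50, 32, 32))   # 50 hypothetical objects
X = depth_derivative_features(depth_maps)

# Reduce to a low-dimensional shape code; the paper compares Isomaps whose
# dimension matches the superquadric parameter count with higher-dimensional ones.
shape_codes = Isomap(n_neighbors=10, n_components=8).fit_transform(X)
print(shape_codes.shape)   # (50, 8)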
first_indexed 2024-12-22T09:21:14Z
format Article
id doaj.art-25d652ef92ff435daace9a23e93773a7
institution Directory Open Access Journal
issn 1662-5188
language English
last_indexed 2024-12-22T09:21:14Z
publishDate 2014-10-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Computational Neuroscience
spelling doaj.art-25d652ef92ff435daace9a23e93773a72022-12-21T18:31:10ZengFrontiers Media S.A.Frontiers in Computational Neuroscience1662-51882014-10-01810.3389/fncom.2014.00132101842Modelling the shape hierarchy for visually guided graspingOmid Rezai0Ashley Kleinhans1Ashley Kleinhans2Eduardo Matallanas3Ben Selby4Bryan P Tripp5University of WaterlooUniversity of JohannesburgCouncil for Scientific and Industrial ResearchUniversidad Politécnica de MadridUniversity of WaterlooUniversity of WaterlooThe monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e. distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. However (in contrast with superquadrics) further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.http://journal.frontiersin.org/Journal/10.3389/fncom.2014.00132/fullAIP3D shapegraspingCIPcosine tuningsuperquadrics
spellingShingle Omid Rezai
Ashley Kleinhans
Ashley Kleinhans
Eduardo Matallanas
Ben Selby
Bryan P Tripp
Modelling the shape hierarchy for visually guided grasping
Frontiers in Computational Neuroscience
AIP
3D shape
grasping
CIP
cosine tuning
superquadrics
title Modelling the shape hierarchy for visually guided grasping
title_full Modelling the shape hierarchy for visually guided grasping
title_fullStr Modelling the shape hierarchy for visually guided grasping
title_full_unstemmed Modelling the shape hierarchy for visually guided grasping
title_short Modelling the shape hierarchy for visually guided grasping
title_sort modelling the shape hierarchy for visually guided grasping
topic AIP
3D shape
grasping
CIP
cosine tuning
superquadrics
url http://journal.frontiersin.org/Journal/10.3389/fncom.2014.00132/full
work_keys_str_mv AT omiderezai modellingtheshapehierarchyforvisuallyguidedgrasping
AT ashleyekleinhans modellingtheshapehierarchyforvisuallyguidedgrasping
AT ashleyekleinhans modellingtheshapehierarchyforvisuallyguidedgrasping
AT eduardoematallanas modellingtheshapehierarchyforvisuallyguidedgrasping
AT beneselby modellingtheshapehierarchyforvisuallyguidedgrasping
AT bryanptripp modellingtheshapehierarchyforvisuallyguidedgrasping