Spatiotemporal transformations for gaze control

Abstract: Sensorimotor transformations require spatiotemporal coordination of signals, that is, coordination through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (the T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model in which cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
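
The model-fitting approach summarized above can be sketched in code. The snippet below is not the authors' published analysis; it is a minimal, hypothetical Python illustration of the T-G continuum idea under stated assumptions: candidate spatial models are generated at points between each trial's target position (T) and its final gaze position (G), a simple Gaussian-kernel response field is fit to trial-by-trial firing rates for each candidate, and the candidate with the lowest leave-one-out residuals is taken as the neuron's best-fit code. The function names (loo_press, best_tg_model), the kernel bandwidth, the continuum range, and the simulated data are all illustrative assumptions, not part of the original article.

```python
# Hypothetical sketch of a T-G continuum fit (illustrative only, not the
# authors' published method or code).
import numpy as np

def loo_press(positions, rates, bandwidth=3.0):
    """Leave-one-out sum of squared residuals for a Gaussian-kernel
    response-field fit of firing rates over 2-D positions (degrees)."""
    sse = 0.0
    for i in range(len(rates)):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        w[i] = 0.0                        # hold out the current trial
        sse += (rates[i] - np.sum(w * rates) / np.sum(w)) ** 2
    return sse

def best_tg_model(T, G, rates, alphas=np.linspace(-0.5, 1.5, 21)):
    """Score candidate spatial models along the T-G continuum
    (alpha = 0: pure target coding, alpha = 1: pure gaze coding)
    and return the best-fit alpha together with all scores."""
    scores = np.array([loo_press(T + a * (G - T), rates) for a in alphas])
    return alphas[int(np.argmin(scores))], scores

# Toy usage: a simulated neuron whose response field follows the gaze endpoint.
rng = np.random.default_rng(0)
T = rng.uniform(-30, 30, size=(200, 2))        # target positions, eye-centered (deg)
G = T + rng.normal(0, 5, size=T.shape)         # gaze endpoints with variable errors
rates = np.exp(-np.sum((G - np.array([10.0, 5.0])) ** 2, axis=1) / (2 * 8.0 ** 2))
rates += rng.normal(0, 0.05, size=len(rates))  # additive measurement noise
alpha_hat, _ = best_tg_model(T, G, rates)
print(f"Best-fit point on the T-G continuum: alpha = {alpha_hat:.2f}")
```

Because the simulated response field is anchored to the gaze endpoint, the recovered best-fit point should fall near alpha = 1; a purely visually driven neuron would instead be expected near alpha = 0 (target coding), and intermediate values correspond to the partially shifted codes the review describes.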

Bibliographic Details
Main Authors: Amirsaman Sajad, Morteza Sadeh, John Douglas Crawford
Author Affiliation: Centre for Vision Research, York University, Toronto, ON, Canada
Format: Article
Language: English
Published: Wiley, 2020-08-01
Series: Physiological Reports, Volume 8, Issue 16
ISSN: 2051-817X
DOI: 10.14814/phy2.14533
Online Access: https://doi.org/10.14814/phy2.14533