Binocular Fusion and Invariant Category Learning due to Predictive Remapping during Scanning of a Depthful Scene with Eye Movements

Bibliographic Details
Main Authors: Stephen Grossberg, Karthik Srinivasan, Arash Yazdanbakhsh
Format: Article
Language: English
Published: Frontiers Media S.A. 2015-01-01
Series: Frontiers in Psychology
Subjects: Depth Perception; spatial attention; object recognition; category learning; saccadic eye movements; stereopsis
Online Access:http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01457/full
author Stephen Grossberg
Karthik Srinivasan
Arash Yazdanbakhsh
collection DOAJ
description How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object’s surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
format Article
id doaj.art-541cccdaaa8a43a6b44475d590f01791
institution Directory Open Access Journal
issn 1664-1078
language English
publishDate 2015-01-01
publisher Frontiers Media S.A.
series Frontiers in Psychology
affiliation Boston University (Stephen Grossberg, Karthik Srinivasan, Arash Yazdanbakhsh)
title Binocular Fusion and Invariant Category Learning due to Predictive Remapping during Scanning of a Depthful Scene with Eye Movements
topic Depth Perception
spatial attention
object recognition
category learning
saccadic eye movements
stereopsis
url http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01457/full