Rapid contextualization of fragmented scene information in the human visual system


Bibliographic Details
Main Authors: Daniel Kaiser, Gabriele Inciuraite, Radoslaw M. Cichy
Format: Article
Language: English
Published: Elsevier, 2020-10-01
Series: NeuroImage
Subjects: Visual perception; Scene representation; Spatial schema; EEG; Representational similarity analysis; Deep neural networks
Online Access: http://www.sciencedirect.com/science/article/pii/S1053811920305310
description Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments’ cortical representations across time. We found that the fragments’ typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments’ cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
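The time-resolved representational similarity analysis described above can be sketched in a few lines: at each time point, pairwise dissimilarities between the EEG channel patterns evoked by the stimuli form a neural representational dissimilarity matrix (RDM), which is then correlated with a model RDM built from the vertical-location labels. The following is a minimal, illustrative simulation, not the study's code; all data, variable names, and parameters (fragment count, channel count, signal window) are hypothetical.

```python
# Illustrative, simplified sketch of time-resolved representational similarity
# analysis (RSA). All data are simulated; parameters are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_frag, n_chan, n_time = 20, 64, 50   # scene fragments x EEG channels x time points

# Each fragment's typical vertical location (0 = bottom, 1 = top)
vert = rng.integers(0, 2, n_frag)

# Simulated EEG: noise plus a location-specific spatial pattern in one time window
eeg = rng.standard_normal((n_frag, n_chan, n_time))
patterns = rng.standard_normal((2, n_chan))
eeg[:, :, 20:30] += 2.0 * patterns[vert][:, :, None]

# Model RDM: two fragments count as dissimilar iff their vertical locations differ
model_rdm = pdist(vert[:, None].astype(float), metric="cityblock")

# Neural RDM at each time point (correlation distance between channel patterns),
# compared against the model RDM with a Spearman correlation
rsa_timecourse = np.array([
    spearmanr(pdist(eeg[:, :, t], metric="correlation"), model_rdm)[0]
    for t in range(n_time)
])

# The model-neural correlation peaks inside the window carrying the signal
print(int(rsa_timecourse.argmax()))
```

In the actual study the analogous peak fell at around 200 ms after image onset; here the "signal" is simply injected into time points 20 to 29 so the toy peak lands there.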
id doaj.art-b73bb3b1e54f43fd93d1655a7e41f08a
institution Directory Open Access Journal
issn 1095-9572
Affiliations:
Daniel Kaiser: Department of Psychology, University of York, York, UK (corresponding author; Heslington, York, YO10 5DD, UK)
Gabriele Inciuraite: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
Radoslaw M. Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
Published in: NeuroImage, vol. 219 (2020-10-01), article 117045
topic Visual perception
Scene representation
Spatial schema
EEG
Representational similarity analysis
Deep neural networks
url http://www.sciencedirect.com/science/article/pii/S1053811920305310