S4-3: Spatial Processing of Visual Motion
Local motion signals are extracted in parallel by a bank of motion detectors, and their spatiotemporal interactions are processed in subsequent stages. In this talk, I will review our recent studies on spatial interactions in visual motion processing. First, we found two types of spatial pooling of local motion signals. Directionally ambiguous 1D local motion signals are pooled across orientation and space to solve the aperture problem, while 2D local motion signals are pooled to estimate a global vector average (e.g., Amano et al., 2009 Journal of Vision 9 (3:4) 1–25). Second, when stimulus presentation is brief, coherent motion detection in a dynamic random-dot kinematogram is inefficient. Nevertheless, it is significantly improved by transient and synchronous presentation of a stationary surround pattern, suggesting that centre-surround spatial interaction may support rapid perception of motion (Linares et al., submitted). Third, to examine how the visual system encodes pairwise relationships between remote motion signals, we measured the temporal rate limit for perceiving the relationship between two motion directions presented simultaneously at different spatial locations. Compared with similar tasks using luminance or orientation signals, motion comparison was more rapid and hence more efficient. This high performance was affected little by inter-element separation, even when it was increased up to 100 deg. These findings indicate the existence of specialized processes that encode long-range relationships between motion signals for quick appreciation of global dynamic scene structure (Maruya et al., in preparation).
Main Author: | Shin'ya Nishida (NTT Communication Science Laboratories, Japan) |
---|---|
Format: | Article |
Language: | English |
Published: | SAGE Publishing, 2012-10-01 |
Series: | i-Perception |
ISSN: | 2041-6695 |
Online Access: | https://doi.org/10.1068/if591 |
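
The abstract contrasts two pooling schemes for local motion: combining directionally ambiguous 1D (component) signals to solve the aperture problem, versus vector-averaging unambiguous 2D signals. The sketch below is a generic textbook illustration of that distinction, not the specific model of Amano et al. (2009); the function names and the least-squares intersection-of-constraints formulation are my own assumptions for exposition.

```python
import numpy as np

# Illustrative sketch only: two standard ways of pooling local motion signals.
#
# A 1D (component) measurement through an aperture reveals only the speed s_i
# along the edge normal n_i, i.e. the constraint n_i . v = s_i. Pooling such
# constraints across orientation/space and solving them jointly (intersection
# of constraints, here via least squares) resolves the aperture problem.
def intersection_of_constraints(normals, speeds):
    """normals: (N, 2) unit normals; speeds: (N,) normal speeds."""
    N = np.asarray(normals, dtype=float)
    s = np.asarray(speeds, dtype=float)
    v, *_ = np.linalg.lstsq(N, s, rcond=None)  # solves N @ v ~= s
    return v

# Unambiguous 2D local velocity estimates can instead be pooled by simple
# vector averaging to estimate global motion.
def vector_average(vectors):
    return np.mean(np.asarray(vectors, dtype=float), axis=0)

if __name__ == "__main__":
    true_v = np.array([2.0, 1.0])
    # Three differently oriented 1D measurements of the same rigid motion.
    angles = np.deg2rad([0.0, 60.0, 120.0])
    normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    speeds = normals @ true_v
    print("IOC estimate:   ", intersection_of_constraints(normals, speeds))
    print("Vector average: ", vector_average([[2.1, 0.9], [1.9, 1.2], [2.0, 1.0]]))
```

With consistent constraints the least-squares solution recovers the true velocity exactly, whereas vector averaging of 2D estimates simply smooths noise across space; the two computations can yield different answers for the same pattern, which is why the abstract treats them as distinct pooling processes.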