Within- and cross-modal distance information disambiguate visual size-change perception

Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size-change percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy—though binocular cues were trusted more than haptic cues. Our results suggest that both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.


Bibliographic Details
Main Authors: Battaglia, Peter W., Kersten, Daniel, Machulla, Tonja, Schrater, Paul R., Ernst, Marc O., Di Luca, Massimiliano
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Public Library of Science 2010
Online Access: http://hdl.handle.net/1721.1/55350
https://orcid.org/0000-0002-9931-3685
_version_ 1826217777668554752
author Battaglia, Peter W.
Kersten, Daniel
Machulla, Tonja
Schrater, Paul R.
Ernst, Marc O.
Di Luca, Massimiliano
author2 Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
author_facet Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Battaglia, Peter W.
Kersten, Daniel
Machulla, Tonja
Schrater, Paul R.
Ernst, Marc O.
Di Luca, Massimiliano
author_sort Battaglia, Peter W.
collection MIT
description Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size-change percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy—though binocular cues were trusted more than haptic cues. Our results suggest that both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning.
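The probabilistic reasoning the abstract alludes to is commonly modeled as inverse-variance (reliability-weighted) combination of Gaussian cues: the prior distance assumption and each auxiliary distance cue contribute in proportion to their reliability, and perceived size then follows from visual angle times estimated distance. The sketch below is only an illustration of that standard scheme under hypothetical numbers, not the authors' actual model or data:

```python
import numpy as np

def combine_gaussian_cues(means, variances):
    """Inverse-variance-weighted fusion of independent Gaussian estimates.
    Cues with smaller variance (higher reliability) get more weight."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(weights * np.asarray(means, dtype=float)) / np.sum(weights)
    var = 1.0 / np.sum(weights)
    return mean, var

# Hypothetical distance estimates (metres): a prior assumption, a binocular
# (disparity) cue, and a haptic cue. The binocular cue is assigned the
# smallest variance, i.e. the highest trust, mirroring the abstract's finding.
means     = [0.60, 0.50, 0.55]   # prior, binocular, haptic
variances = [0.04, 0.01, 0.02]

d_hat, d_var = combine_gaussian_cues(means, variances)

# Size from the (ambiguous) retinal input: for small visual angles,
# perceived size ≈ visual angle (radians) × estimated distance.
visual_angle = 0.10              # radians, hypothetical
size_hat = visual_angle * d_hat
print(f"fused distance = {d_hat:.3f} m, size = {size_hat:.4f} m")
# → fused distance = 0.529 m, size = 0.0529 m
```

Note the design consequence: removing a reliable auxiliary cue (e.g., the binocular term) pulls the fused estimate back toward the prior, which is exactly the monocular-viewing condition described in the abstract.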
first_indexed 2024-09-23T17:09:00Z
format Article
id mit-1721.1/55350
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T17:09:00Z
publishDate 2010
publisher Public Library of Science
record_format dspace
spelling mit-1721.1/55350 2022-09-29T23:59:43Z Within- and cross-modal distance information disambiguate visual size-change perception Battaglia, Peter W. Kersten, Daniel Machulla, Tonja Schrater, Paul R. Ernst, Marc O. Di Luca, Massimiliano Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences Battaglia, Peter W. Battaglia, Peter W. Perception is fundamentally underconstrained because different combinations of object properties can generate the same sensory information. To disambiguate sensory information into estimates of scene properties, our brains incorporate prior knowledge and additional “auxiliary” (i.e., not directly relevant to the desired scene property) sensory information to constrain perceptual interpretations. For example, knowing the distance to an object helps in perceiving its size. The literature contains few demonstrations of the use of prior knowledge and auxiliary information in combined visual and haptic disambiguation and almost no examination of haptic disambiguation of vision beyond “bistable” stimuli. Previous studies have reported that humans integrate multiple unambiguous sensations to perceive single, continuous object properties, like size or position. Here we test whether humans use visual and haptic information, individually and jointly, to disambiguate size from distance. We presented participants with a ball moving in depth with a changing diameter. Because no unambiguous distance information is available under monocular viewing, participants rely on prior assumptions about the ball's distance to disambiguate their size-change percept. Presenting auxiliary binocular and/or haptic distance information augments participants' prior distance assumptions and improves their size judgment accuracy—though binocular cues were trusted more than haptic cues.
Our results suggest that both visual and haptic distance information disambiguate size perception, and we interpret these results in the context of probabilistic perceptual reasoning. Max Planck Society for the Advancement of Science United States. Office of Naval Research (N 00014-07-1-0937) SFB (550-A11) EU (grant 27141 “ImmerSence”) University of Minnesota Doctoral Dissertation Fellowship 2010-05-28T18:48:04Z 2010-05-28T18:48:04Z 2010-01 2010-03 Article http://purl.org/eprint/type/JournalArticle 1553-734X 1553-7358 http://hdl.handle.net/1721.1/55350 Battaglia PW, Di Luca M, Ernst MO, Schrater PR, Machulla T, et al. (2010) Within- and Cross-Modal Distance Information Disambiguate Visual Size-Change Perception. PLoS Comput Biol 6(3): e1000697. doi:10.1371/journal.pcbi.1000697 https://orcid.org/0000-0002-9931-3685 en_US http://dx.doi.org/10.1371/journal.pcbi.1000697 PLoS Computational Biology Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. application/pdf Public Library of Science PLoS
spellingShingle Battaglia, Peter W.
Kersten, Daniel
Machulla, Tonja
Schrater, Paul R.
Ernst, Marc O.
Di Luca, Massimiliano
Within- and cross-modal distance information disambiguate visual size-change perception
title Within- and cross-modal distance information disambiguate visual size-change perception
title_full Within- and cross-modal distance information disambiguate visual size-change perception
title_fullStr Within- and cross-modal distance information disambiguate visual size-change perception
title_full_unstemmed Within- and cross-modal distance information disambiguate visual size-change perception
title_short Within- and cross-modal distance information disambiguate visual size-change perception
title_sort within and cross modal distance information disambiguate visual size change perception
url http://hdl.handle.net/1721.1/55350
https://orcid.org/0000-0002-9931-3685
work_keys_str_mv AT battagliapeterw withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception
AT kerstendaniel withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception
AT machullatonja withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception
AT schraterpaulr withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception
AT ernstmarco withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception
AT dilucamassimiliano withinandcrossmodaldistanceinformationdisambiguatevisualsizeperception