Crossmodal binding: evaluating the "unity assumption" using audiovisual speech stimuli.

Full description

We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. When performance was measured in terms of the just noticeable difference, participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating the mismatched stimuli than when evaluating the matched stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.
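As an illustrative aside (not part of this record, and not the authors' own analysis code), the sketch below shows one standard way a just noticeable difference is estimated from temporal order judgment data collected with the method of constant stimuli: fit a cumulative Gaussian to the proportion of "visual first" responses across stimulus onset asynchronies and derive the JND from the fitted slope. All SOA values and response proportions below are hypothetical.

    # Illustrative sketch only; not code from the paper. Estimates the point of
    # subjective simultaneity (PSS) and the just noticeable difference (JND)
    # from temporal order judgment data. Data values are invented.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical SOAs (ms; negative = auditory stream led) and the proportion
    # of trials on which the visual stream was judged to have come first.
    soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
    p_visual_first = np.array([0.03, 0.08, 0.22, 0.38, 0.55, 0.68, 0.81, 0.94, 0.98])

    def psychometric(soa, pss, sigma):
        """Cumulative Gaussian: pss = 50% point, sigma = inverse of slope."""
        return norm.cdf(soa, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 100.0])

    # The JND is conventionally half the interquartile range of the fitted
    # function, i.e. the SOA change needed to go from 50% to 75% responses.
    jnd = 0.6745 * sigma
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")

Under this convention a smaller JND indicates finer temporal sensitivity, so the "easier" temporal order judgments for mismatched stimuli reported above correspond to smaller JNDs.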

Bibliographic Details
Main Authors: Vatakis, A, Spence, C
Format: Journal article
Language: English
Published: 2007
Collection: OXFORD
Record ID: oxford-uuid:fa3b23c5-537c-4ed7-9c71-9bbe81d9d34d
Institution: University of Oxford
Record format: DSpace
Source: Symplectic Elements at Oxford