Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

Bibliographic Details
Main Authors: Chen, Y, Spence, C
Format: Journal article
Language: English
Published: 2011
description We propose a multisensory framework, based on Glaser and Glaser's (1989) general reading-naming interference model, to account for the semantic priming effects of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments investigated two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture, as well as when the two are presented simultaneously? Second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? We estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sound led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When a dual picture detection/identification task was used, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). We therefore suggest that sufficient processing time is needed for an auditory stimulus to access its associated meaning and thereby modulate visual perception. Moreover, the interactions between the pictures and the two types of sound depend not only on the processing route by which they access semantic representations, but also on the response required to fulfill the demands of the task.
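The abstract reports sensitivity (d′) and response criterion (c) estimated with signal detection theory. As a minimal sketch of how these standard measures are computed from a picture detection task's trial counts (this is the textbook formulation, not the authors' actual analysis code; the function name and the log-linear correction are illustrative assumptions):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from raw trial counts.

    Standard SDT: d' = z(hit rate) - z(false-alarm rate),
    c = -(z(hit rate) + z(false-alarm rate)) / 2.
    """
    # Log-linear correction keeps rates away from 0 and 1,
    # which would otherwise yield infinite z-scores.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

A participant who detects pictures well (many hits, few false alarms) gets a large positive d′; a criterion near zero indicates no bias toward "present" or "absent" responses. A crossmodal priming effect on sensitivity, as reported here, would show up as a higher d′ in related-sound conditions.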
id oxford-uuid:340d1a5b-f5a6-4d02-b0e2-c94acd0200cc
institution University of Oxford