Dissociating the timecourses of the crossmodal semantic priming effects elicited by naturalistic sounds and spoken words


Bibliographic Details
Main Authors: Spence, C, Chen, Y
Format: Journal article
Published: Springer 2017
Description
Summary: The present study compared the timecourses of the crossmodal semantic priming effects elicited by naturalistic sounds and spoken words on visual picture processing. Following an auditory prime, a picture (or blank frame) was briefly presented and then immediately masked. The participants had to judge whether a picture was present or not. Naturalistic sounds consistently elicited a crossmodal semantic priming effect on visual sensitivity (d') for pictures (higher d' in the congruent than in the incongruent condition) at the 350 ms rather than at the 1000 ms stimulus onset asynchrony (SOA). Spoken words mainly elicited a crossmodal semantic priming effect at the 1000 ms rather than at the 350 ms SOA, but this effect was modulated by the order in which the two SOAs were tested. It would therefore appear that visual picture processing can be rapidly primed by naturalistic sounds via crossmodal associations, and that this effect is short-lived. In contrast, spoken words prime visual picture processing over a wider range of prime-target intervals, though this effect was conditioned by the prior context.