Abstract
An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., “potatoes”) following associated visual primes (e.g., “MASHED”), neutral visual primes (e.g., “FACE”), or a visual mask (e.g., “XXXX”). Auditory targets began with voiced (/b/, /d/, /g/) or voiceless (/p/, /t/, /k/) stop consonants, an acoustic difference known to yield differences in N1 amplitude. In Experiment 1 (N = 21), semantic context modulated responses to upcoming targets, with smaller N1 amplitudes for semantic associates. In Experiment 2 (N = 29), semantic context changed how listeners encoded sounds: Ambiguous voice-onset times were encoded similarly to the voicing end point elicited by semantic associates. These results are consistent with an interactive model of spoken-word recognition that includes top-down effects on early perception.