Abstract
Spoken word recognition is characterized by multiple activation of sound patterns that are consistent with the acoustic-phonetic input. Recently, an extreme form of multiple activation was observed: Bilingual listeners activated words from both languages that were consistent with the input. We explored the degree to which bilingual multiple activation may be constrained by fine-grained acoustic-phonetic information. In a head-mounted eyetracking experiment, we presented Spanish-English bilinguals with spoken Spanish words having word-initial stop consonants with either English- or Spanish-appropriate voice onset times. Participants fixated interlingual distractors (nontarget pictures whose English names were phonologically similar to the Spanish targets) more frequently than control distractors when the target words contained English-appropriate voice onset times. These results demonstrate that fine-grained acoustic-phonetic information and a precise match between input and representation are critical for parallel activation of two languages.