Abstract
In everyday life, children not only hear but also often see their caregiver talking. Children build on this correspondence to resolve auditory uncertainties and decipher words from the speech input. As they hear the name of an object, 18- to 30-month-olds form a representation that permits word recognition in either the auditory modality (i.e., the acoustic form of the word with no accompanying face) or the visual modality (i.e., a silent talking face). Building on this work, we asked whether this ability is already present at a younger age. Using a cross-modal word-learning task, French-learning 14-month-old infants were taught novel word–object mappings. During learning, they experienced the words auditorily; at test, they experienced the words in either the auditory or the visual modality. Results revealed successful word recognition in the auditory modality only. This suggests that, unlike older children, 14-month-old infants interpret novel auditorily learned words only auditorily. This finding is discussed in light of the perceptual and lexical achievements that may influence infants’ capacity to navigate from the auditory to the visual modality during word learning.
