Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.
Main Authors: | Mélanie Havy; Pascal Zesiger |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2017-12-01 |
Series: | Frontiers in Psychology |
Subjects: | audio-visual speech perception; word-learning; cross-modal recognition; lexical representation; child development |
Online Access: | http://journal.frontiersin.org/article/10.3389/fpsyg.2017.02122/full |
_version_ | 1818518299739160576 |
author | Mélanie Havy; Pascal Zesiger |
author_facet | Mélanie Havy; Pascal Zesiger |
author_sort | Mélanie Havy |
collection | DOAJ |
description | From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words. |
first_indexed | 2024-12-11T01:08:21Z |
format | Article |
id | doaj.art-1f0ca0f57890468297ab027a613abddb |
institution | Directory Open Access Journal |
issn | 1664-1078 |
language | English |
last_indexed | 2024-12-11T01:08:21Z |
publishDate | 2017-12-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Psychology |
title | Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children |
title_full | Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children |
title_fullStr | Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children |
title_full_unstemmed | Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children |
title_short | Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children |
title_sort | learning spoken words via the ears and eyes evidence from 30 month old children |
topic | audio-visual speech perception; word-learning; cross-modal recognition; lexical representation; child development |
url | http://journal.frontiersin.org/article/10.3389/fpsyg.2017.02122/full |
work_keys_str_mv | AT melaniehavy learningspokenwordsviatheearsandeyesevidencefrom30montholdchildren AT pascalzesiger learningspokenwordsviatheearsandeyesevidencefrom30montholdchildren |