Modeling cross-modal interactions in early word learning

Infancy research demonstrating that verbal labels facilitate visual category formation suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped "onto" previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and interactively, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.

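The description outlines the architecture (two modality-specific self-organizing maps coupled by Hebbian connections that propagate activation between them during learning) without giving its mechanics. The sketch below is not the authors' implementation: the map sizes, learning rates, neighborhood function, normalization, decay term, and toy stimuli are all illustrative assumptions; only the overall dual-map, Hebbian-coupling structure follows the abstract.

    import numpy as np

    class SOM:
        """Tiny Kohonen self-organizing map on a square grid."""
        def __init__(self, side, dim, rng, lr=0.3, sigma=1.5):
            self.w = rng.random((side * side, dim))          # unit weight vectors
            self.lr, self.sigma = lr, sigma
            gx, gy = np.meshgrid(np.arange(side), np.arange(side))
            self.grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)

        def activation(self, x):
            """Graded response centered on the best-matching unit (BMU)."""
            bmu = np.argmin(np.linalg.norm(self.w - x, axis=1))
            gdist = np.linalg.norm(self.grid - self.grid[bmu], axis=1)
            return np.exp(-gdist ** 2 / (2 * self.sigma ** 2))

        def update(self, x, act):
            """Pull unit weights toward the input, gated by activation."""
            self.w += self.lr * act[:, None] * (x - self.w)


    def train_step(v_in, a_in, visual, auditory, W_av, W_va, hebb_lr=0.05):
        # Within-modality responses.
        v_act = visual.activation(v_in)
        a_act = auditory.activation(a_in)

        # Cross-modal propagation: each map also receives activation relayed
        # from the other map through the Hebbian connection matrices.
        v_total = v_act + W_av @ a_act        # label activity reaches the visual map
        a_total = a_act + W_va @ v_act        # object activity reaches the auditory map
        v_total /= v_total.max()
        a_total /= a_total.max()

        # Map organization is driven by the combined, cross-modally shaped activation.
        visual.update(v_in, v_total)
        auditory.update(a_in, a_total)

        # Hebbian learning on the cross-modal weights (small decay keeps them bounded).
        W_av += hebb_lr * (np.outer(v_total, a_total) - 0.01 * W_av)
        W_va += hebb_lr * (np.outer(a_total, v_total) - 0.01 * W_va)


    rng = np.random.default_rng(0)
    visual = SOM(10, 4, rng)                 # 10x10 map, 4 toy visual features
    auditory = SOM(10, 3, rng)               # 10x10 map, 3 toy acoustic features
    W_av = np.zeros((100, 100))              # auditory -> visual connections
    W_va = np.zeros((100, 100))              # visual -> auditory connections

    # Two toy categories: noisy visual exemplars paired with a (nearly) fixed label.
    proto_v, proto_a = rng.random((2, 4)), rng.random((2, 3))
    for _ in range(2000):
        c = rng.integers(2)
        v = proto_v[c] + 0.05 * rng.standard_normal(4)
        a = proto_a[c] + 0.01 * rng.standard_normal(3)
        train_step(v, a, visual, auditory, W_av, W_va)

Under these assumptions, units on each map should come to cluster by category, with the cross-modal weights linking corresponding regions of the two maps; this is the kind of audio-visual interaction during learning that the abstract describes, not a reproduction of its results.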

Bibliographic Details
Main Authors: Althaus, N.; Mareschal, D.
Format: Journal article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2013