Language acquisition and probabilistic models: Keeping it simple

Hierarchical Bayesian Models (HBMs) have been used with some success to capture empirically observed patterns of under- and overgeneralization in child language acquisition. However, as is well known, HBMs are "ideal" learning systems, assuming access to unlimited computational resources that may not be available to child language learners. Consequently, it remains crucial to carefully assess the use of HBMs along with alternative, possibly simpler, candidate models. This paper presents such an evaluation for a language acquisition domain where explicit HBMs have been proposed: the acquisition of English dative constructions. In particular, we present a detailed, empirically grounded model-selection comparison of HBMs vs. a simpler alternative based on clustering along with maximum likelihood estimation that we call linear competition learning (LCL). Our results demonstrate that LCL can match HBM performance without incurring the high computational costs associated with HBMs. © 2013 Association for Computational Linguistics.


Bibliographic Details
Format: Article (Conference Paper)
Language: English
Published: Association for Computational Linguistics (ACL), 2021
Institution: Massachusetts Institute of Technology
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://hdl.handle.net/1721.1/132155