Using Computational Models to Test Syntactic Learnability
Abstract: We study the learnability of English filler–gap dependencies and the “island” constraints on them by assessing the generalizations made by autoregressive (incremental) language models that use deep learning to predict the ne...
Main Authors: Wilcox, Ethan Gotlieb; Futrell, Richard; Levy, Roger
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: English
Published: MIT Press, 2023
Online Access: https://hdl.handle.net/1721.1/150009
Similar Items
- Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
  by: Wilcox, Ethan, et al.
  Published: (2021)
- Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization
  by: Thrush, Tristan, et al.
  Published: (2021)
- Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations
  by: Wilcox, Ethan, et al.
  Published: (2021)
- What do RNN Language Models Learn about Filler–Gap Dependencies?
  by: Wilcox, Ethan, et al.
  Published: (2021)
- Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
  by: Wilcox, Ethan, et al.
  Published: (2022)