Neural language models and human linguistic knowledge
Language is one of the hallmarks of intelligence, demanding explanation in a theory of human cognition. However, language presents unique practical challenges for quantitative empirical research, making many linguistic theories difficult to test at naturalistic scales. Artificial neural network language models (LMs) provide a new tool for studying language with mathematical precision and control, as they exhibit remarkably sophisticated linguistic behaviors while being fully intervenable.
Main Author: | Hu, Jennifer |
---|---|
Other Authors: | Levy, Roger P. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2023 |
Online Access: | https://hdl.handle.net/1721.1/152578 |
author | Hu, Jennifer |
author2 | Levy, Roger P. |
collection | MIT |
description | Language is one of the hallmarks of intelligence, demanding explanation in a theory of human cognition. However, language presents unique practical challenges for quantitative empirical research, making many linguistic theories difficult to test at naturalistic scales. Artificial neural network language models (LMs) provide a new tool for studying language with mathematical precision and control, as they exhibit remarkably sophisticated linguistic behaviors while being fully intervenable. While LMs differ from humans in many ways, the learning outcomes of these models can reveal the behaviors that may emerge through expressive statistical learning algorithms applied to linguistic input.
In this thesis, I demonstrate this approach through three case studies using LMs to investigate open questions in language acquisition and comprehension. First, I use LMs to perform controlled manipulations of language learning, and find that syntactic generalizations depend more on a learner's inductive bias than on training data size. Second, I use LMs to explain systematic variation in scalar inferences by approximating human listeners' expectations over unspoken alternative sentences (e.g., "The bill was supported overwhelmingly" implies that the bill was not supported unanimously). Finally, I show that LMs and humans exhibit similar behaviors on a set of non-literal comprehension tasks which are hypothesized to require social reasoning (e.g., inferring a speaker's intended meaning from ironic statements). These findings suggest that certain aspects of linguistic knowledge could emerge through domain-general prediction mechanisms, while other aspects may require specific inductive biases and conceptual structures. |
first_indexed | 2024-09-23T09:39:21Z |
format | Thesis |
id | mit-1721.1/152578 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T09:39:21Z |
publishDate | 2023 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
department | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences |
degree | Ph.D. |
dateIssued | 2023-06 |
rights | Attribution 4.0 International (CC BY 4.0); copyright retained by author(s); https://creativecommons.org/licenses/by/4.0/ |
mimetype | application/pdf |
title | Neural language models and human linguistic knowledge |
url | https://hdl.handle.net/1721.1/152578 |