Cause, Composition, and Structure in Language

Bibliographic Details
Main Author: Qian, Peng
Other Authors: Levy, Roger P.
Format: Thesis
Published: Massachusetts Institute of Technology 2022
Online Access: https://hdl.handle.net/1721.1/145598
https://orcid.org/0000-0002-6916-3057
Description
Summary: From everyday communication to exploring new thoughts through writing, humans use language in a remarkably flexible, robust, and creative way. In this thesis, I present three case studies supporting the overarching hypothesis that linguistic knowledge in the human mind can be understood as hierarchically-structured causal generative models, within which a repertoire of compositional inference motifs supports efficient inference. I begin with a targeted case study showing how native speakers follow principles of noisy-channel inference in resolving subject-verb agreement mismatches such as "The gift for the kids are hidden under the bed". Results suggest that native speakers' inferences reflect both prior expectations and structure-sensitive conditioning of error probabilities consistent with the statistics of the language production environment. Second, I develop a more open-ended inferential challenge: completing fragmentary linguistic inputs such as "____ published won ____." into well-formed sentences. I use large-scale neural language models to compare two classes of models on this task: the task-specific fine-tuning approach standard in AI and NLP, versus an inferential approach involving composition of two simple computational motifs; the inferential approach yields more human-like completions. Third, I show that incorporating hierarchical linguistic structure into one of these computational motifs, namely the auto-regressive word prediction task, yields improvements in neural language model performance on targeted evaluations of models' grammatical capabilities. I conclude by suggesting future directions in understanding the form and content of these causal generative models of human language.
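
For readers unfamiliar with the noisy-channel framing mentioned in the first case study, the following is a minimal sketch of the standard Bayesian formulation; the notation is illustrative and not drawn from the thesis itself. The comprehender infers the intended sentence from a possibly corrupted perceived sentence by weighing a prior over intended sentences against a noise (production-error) model:

\[
P(s_{\text{intended}} \mid s_{\text{perceived}}) \;\propto\; P_{\text{noise}}(s_{\text{perceived}} \mid s_{\text{intended}})\, P_{\text{prior}}(s_{\text{intended}})
\]

On this view, an utterance like "The gift for the kids are hidden under the bed" may be interpreted as an errorful rendering of a more probable intended sentence, with the inferred reading depending on both the prior and the structure-sensitive error probabilities.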