Investigating a neural language model’s replicability of psycholinguistic experiments: A case study of NPI licensing

The recent success of deep learning neural language models such as Bidirectional Encoder Representations from Transformers (BERT) has brought innovations to computational language research. The present study explores the possibility of using a language model to investigate human language processes, based on a case study of negative polarity items (NPIs). We first conducted an experiment with BERT to examine whether the model captures the hierarchical structural relationship between an NPI and its licensor, and whether it is prone to an error analogous to the grammatical illusion observed in psycholinguistic experiments (Experiment 1). We also investigated whether the model captures the fine-grained semantic properties of NPI licensors and discriminates their subtle differences on a scale of licensing strength (Experiment 2). The results of the two experiments suggest that, overall, the neural language model is highly sensitive to both the syntactic and the semantic constraints on NPI processing. Its processing patterns and sensitivities closely resemble those of humans, suggesting its potential role as a research tool, or even an object of study, in language research.
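
This record does not include the authors' experimental code, but as a rough illustration of the kind of probe the abstract describes, the sketch below compares the probability BERT assigns to the NPI "ever" in a licensed context (under the negative quantifier "no") versus an unlicensed one. It is a minimal sketch under my own assumptions, not the paper's materials: the bert-base-uncased checkpoint, the example sentences, and the npi_probability helper are all illustrative choices.

```python
# Minimal sketch (not the authors' code): probe BERT's NPI-licensing
# sensitivity via masked-token probabilities.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def npi_probability(sentence: str, npi: str = "ever") -> float:
    """Probability BERT assigns to `npi` at the [MASK] position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    # Locate the [MASK] token in the input sequence.
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_index], dim=-1)
    npi_id = tokenizer.convert_tokens_to_ids(npi)
    return probs[0, npi_id].item()

# Licensed: the negative quantifier "no" c-commands the NPI position.
licensed = "No student has [MASK] failed the exam."
# Unlicensed: no negative licensor, so "ever" should be dispreferred.
unlicensed = "The student has [MASK] failed the exam."

print(npi_probability(licensed), npi_probability(unlicensed))
```

If the model is sensitive to NPI licensing in the way the abstract reports, the first probability should come out substantially higher than the second.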

Bibliographic Details
Main Authors: Unsub Shin (Department of Linguistics, Korea University, Seoul, Republic of Korea), Eunkyung Yi (Department of English Education, Ewha Womans University, Seoul, Republic of Korea), Sanghoun Song (Department of Linguistics, Korea University, Seoul, Republic of Korea)
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-02-01
Series: Frontiers in Psychology
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2023.937656
Subjects: neural language model; BERT; negative polarity items; NPI licensing; grammatical illusion; licensing strength
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.937656/full