Not wacky vs. definitely wacky: a study of scalar adverbs in pretrained language models
Vector-space models of word meaning all assume that words occurring in similar contexts have similar meanings. Words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close – creating well-known challenges for NLP applications that involve logical reasoning. Pretrained language models such as BERT, RoBERTa, GPT-2, and GPT-3 hold the promise of performing better on logical tasks than classic static word embeddings. However, reports are mixed about their success. Here, we advance this discussion through a systematic study of scalar adverbs, an under-explored class of words with strong logical force. Using three different tasks involving both naturalistic social media data and constructed examples, we investigate the extent to which BERT, RoBERTa, GPT-2 and GPT-3 exhibit knowledge of these common words. We ask: 1) Do the models distinguish amongst the three semantic categories of MODALITY, FREQUENCY and DEGREE? 2) Do they have implicit representations of full scales from maximally negative to maximally positive? 3) How do word frequency and contextual factors impact model performance? We find that despite capturing some aspects of logical meaning, the models still have obvious shortfalls.
Main Authors: | Lorge, I; Pierrehumbert, J B |
---|---|
Format: | Conference item |
Language: | English |
Published: | Association for Computational Linguistics, 2023 |
Record ID: | oxford-uuid:bc133172-194e-4026-8efd-beb86612a4fb |
Institution: | University of Oxford |