Not wacky vs. definitely wacky: a study of scalar adverbs in pretrained language models
Vector-space models of word meaning all assume that words occurring in similar contexts have similar meanings. Words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close – creating well-known challenges for NLP applications that involve…
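To make the abstract's point concrete, here is a minimal sketch, assuming the Hugging Face `transformers` and `torch` packages; the model choice (`bert-base-uncased`), the mean-pooling strategy, and the example sentences are illustrative assumptions, not the authors' experimental setup. It embeds two sentences that differ only in a scalar adverb of opposite logical force and prints their cosine similarity, which typically comes out high despite the difference in meaning.

```python
# Sketch (not the paper's method): sentences that differ only in a scalar
# adverb often get very similar contextual embeddings, even when the adverbs
# flip the logical force of the claim.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # illustrative choice, not from the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the final hidden layer over all tokens (incl. [CLS]/[SEP])."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

a = sentence_embedding("The movie was not wacky.")
b = sentence_embedding("The movie was definitely wacky.")
sim = torch.cosine_similarity(a, b, dim=0)
print(f"cosine similarity: {sim.item():.3f}")  # typically high, despite opposite force
```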
| Main Authors: | Lorge, I; Pierrehumbert, JB |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Association for Computational Linguistics, 2023 |
Similar Items
- Geographic adaptation of pretrained language models
  by: Hofmann, V, et al.
  Published: (2024)
- DagoBERT: generating derivational morphology with a pretrained language model
  by: Hofmann, V, et al.
  Published: (2020)
- An embarrassingly simple method to mitigate undesirable properties of pretrained language model tokenizers
  by: Hofmann, V, et al.
  Published: (2022)
- Probing large language models for scalar adjective lexical semantics and scalar diversity pragmatics
  by: Lin, F, et al.
  Published: (2024)
- Survey of Applications of Pretrained Language Models
  by: Sun, K; Luo, X; Luo, MY
  Published: (2023-01-01)