Probing Language Models for Contextual Scale Understanding

Pretrained language models (LMs) have demonstrated a remarkable ability to capture linguistic and factual knowledge across a range of domains. They also appear to encode relational information about concepts, much like a knowledge base. However, since they are trained solely on textual corpora, it is...

Bibliographic Details
Main Author: Vedantam, Saaketh
Other Authors: Kim, Yoon
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/151617