Measuring and Manipulating State Representations in Neural Language Models

Modern neural language models (LMs) are typically pre-trained with a self-supervised objective: they are presented with texts in which pieces have been withheld, and are asked to generate the missing portions. Simply by scaling up such training, LMs have achieved remarkable performance...
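The objective sketched above can be illustrated with a toy example: withhold some tokens from a text, and record the targets a model would be trained to reproduce. This is a minimal sketch, not the thesis's method; the names (`mask_spans`, `MASK`) and the masking rate are illustrative assumptions.

```python
import random

MASK = "<mask>"  # illustrative placeholder token, not from the thesis

def mask_spans(tokens, mask_rate=0.5, rng=None):
    """Withhold a fraction of tokens, returning the corrupted
    input and the targets the model must generate."""
    rng = rng or random.Random(0)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            corrupted.append(MASK)   # withheld piece
            targets.append(tok)      # what the model must predict
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "the cat sat on the mat".split()
corrupted, targets = mask_spans(tokens)
# A training loss would score the model's prediction at each MASK
# position against the corresponding withheld target token.
```

Filling each `MASK` position with its target token recovers the original text, which is exactly the signal the self-supervised objective exploits.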


Bibliographic Details
Main Author: Li, Belinda Zou
Other Authors: Andreas, Jacob
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/150114