Recognizing Speech with Large Language Models


Bibliographic Details
Main Author: Zeitoun, Abbas
Other Authors: Kim, Yoon
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/151573
Description
Summary: Recent work has shown that large language models can be made to parse the contents of non-text embeddings and use those contents to perform various tasks. However, work on audio inputs to large language models has thus far focused either on training a joint audio-text model from scratch on large amounts of data or on training the model to perform surface-level audio-text classification tasks. In this work, we show that a pretrained T5 encoder-decoder language model fine-tuned on as little as 10 hours of speech data can transcribe the contents of input audio embeddings, and even outperforms a specialized baseline speech-to-text model at transcribing more difficult speech utterances. The resulting model serves as a first step towards language models that can manipulate audio inputs just as well as text inputs, and that can leverage the additional information in audio inputs to perform tasks that are not possible with text inputs alone.
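The abstract does not specify how the audio embeddings are fed to the language model, but a common pattern for this kind of system is to map the audio encoder's output frames through a learned linear projection into the language model's input-embedding space, then let the pretrained encoder-decoder consume them in place of token embeddings. The sketch below illustrates only that projection step; the dimensions, the projection layer, and the function name are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Hypothetical dimensions -- assumptions for illustration, not values
# taken from the thesis.
D_AUDIO = 512   # dimensionality of the audio encoder's output frames
D_MODEL = 768   # hidden size of the language model (768 for t5-base)

rng = np.random.default_rng(0)

# Stand-in for a *learned* linear projection that is trained during
# fine-tuning; here it is just randomly initialized.
W = rng.normal(scale=0.02, size=(D_AUDIO, D_MODEL))
b = np.zeros(D_MODEL)

def project_audio_embeddings(audio_emb: np.ndarray) -> np.ndarray:
    """Map a (T, D_AUDIO) sequence of audio-encoder frames to
    (T, D_MODEL) so the LM encoder can consume it in place of
    token embeddings."""
    return audio_emb @ W + b

# Example: 100 audio frames become 100 LM-ready input vectors.
audio_emb = rng.normal(size=(100, D_AUDIO))
lm_inputs = project_audio_embeddings(audio_emb)
```

In a framework like Hugging Face Transformers, the projected sequence could then be passed to the encoder via an `inputs_embeds`-style argument rather than token IDs, with the decoder trained to emit the transcript; again, this is one plausible wiring, not a description of the thesis's exact architecture.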