Recognizing Speech with Large Language Models

Recent work has shown that large language models can be made to parse the contents of non-text embeddings and use those contents to perform various tasks. However, work on audio inputs to large language models has so far focused either on training a joint audio-text model from scratch on large amounts of data, or on training the model to perform surface-level audio-text classification tasks. In this work, we show that a pretrained T5 encoder-decoder language model fine-tuned on as little as 10 hours of speech data can transcribe the contents of input audio embeddings, and even outperforms a specialized baseline speech-to-text model at transcribing more difficult speech utterances. The resulting model is a first step towards language models that can manipulate audio inputs just as well as text inputs, and that can leverage the additional information in audio to perform tasks that are not possible with text alone.

Bibliographic Details
Main Author: Zeitoun, Abbas
Other Authors: Kim, Yoon
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/151573
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Degree: S.M.
Date Issued: 2023-06
Rights: In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/