On internal language representations in deep learning: an analysis of machine translation and speech recognition

Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.

Bibliographic Details
Main Author: Belinkov, Yonatan
Other Authors: James R. Glass.
Format: Thesis
Language: eng
Published: Massachusetts Institute of Technology, 2018
Subjects: Electrical Engineering and Computer Science
Online Access: http://hdl.handle.net/1721.1/118079
Alternative Title: Analysis of machine translation and speech recognition
Physical Description: 228 pages, application/pdf
Notes: Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 183-228).
Other Identifier: 1051773125
Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582

Abstract
Language technology has become pervasive in everyday life. Neural networks are a key component of this technology thanks to their ability to model large amounts of data. In contrast to traditional systems, models based on deep neural networks (deep learning) can be trained end-to-end on input-output pairs, such as a sentence in one language and its translation in another, or a speech utterance and its transcription. The end-to-end training paradigm simplifies the engineering process while giving the model the flexibility to optimize for the desired task. This, however, often comes at the expense of interpretability: understanding the role of different parts of a deep neural network is difficult, and such models are sometimes perceived as "black boxes," hindering research efforts and limiting their utility to society.

This thesis investigates what kind of linguistic information is represented in deep learning models for written and spoken language. To study this question, I develop a unified methodology for evaluating internal representations in neural networks, consisting of three steps: training a model on a complex end-to-end task; generating feature representations from different parts of the trained model; and training classifiers on simple supervised learning tasks using those representations. I demonstrate the approach on two core tasks in human language technology: machine translation and speech recognition. I perform a battery of experiments comparing different layers, modules, and architectures in end-to-end models trained on these tasks, and evaluate the quality of their representations at different linguistic levels.

First, I study how neural machine translation models learn morphological information. Second, I compare lexical-semantic and part-of-speech information in neural machine translation. Third, I investigate where syntactic and semantic structures are captured in these models. Finally, I explore how end-to-end automatic speech recognition models encode phonetic information. The analyses illuminate the inner workings of end-to-end machine translation and speech recognition systems, explain how they capture different language properties, and suggest potential directions for improving them. I also point to open questions concerning the representation of other linguistic properties, the investigation of different models, and the use of other analysis methods. Taken together, this thesis provides a comprehensive analysis of internal language representations in deep learning models.
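
The three-step methodology in the abstract can be made concrete in code. Below is a minimal sketch, assuming PyTorch and scikit-learn; the encoder, token ids, and tag labels are hypothetical stand-ins rather than the models and annotated corpora used in the thesis, and a simple logistic-regression probe stands in for the thesis's classifiers.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Step 1: a stand-in for a trained end-to-end encoder. It is randomly
# initialized here; in a real experiment the weights would be loaded from
# a trained machine translation or speech recognition model.
vocab_size, emb_dim, hidden_dim = 1000, 64, 128
embed = nn.Embedding(vocab_size, emb_dim)
encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)

# Hypothetical data: 32 "sentences" of 10 token ids each, with a synthetic
# part-of-speech tag (5 classes) for every token.
tokens = torch.randint(0, vocab_size, (32, 10))
pos_tags = torch.randint(0, 5, (32, 10))

# Step 2: generate feature representations from a chosen part of the model
# (here, the top LSTM layer's hidden state at every token position).
with torch.no_grad():
    states, _ = encoder(embed(tokens))        # shape: (32, 10, hidden_dim)
features = states.reshape(-1, hidden_dim).numpy()
labels = pos_tags.reshape(-1).numpy()

# Step 3: train a simple supervised classifier on the representations.
# Its accuracy serves as a measure of how much part-of-speech information
# the chosen layer encodes.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probing accuracy:", probe.score(features, labels))
```

In practice the probe would be trained and evaluated on disjoint data splits, and comparing its accuracy across layers, modules, or architectures indicates where a given linguistic property is most strongly encoded.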