Neural attentions for natural language understanding and modeling
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Main author: | |
---|---|
Other authors: | |
Material type: | Thesis |
Language: | eng |
Published: |
Massachusetts Institute of Technology
2019
|
Subjects: | |
Links: | https://hdl.handle.net/1721.1/122760 |
_version_ | 1826189337929187328 |
---|---|
author | Luo, Hongyin. |
author2 | James Glass. |
author_facet | James Glass. Luo, Hongyin. |
author_sort | Luo, Hongyin. |
collection | MIT |
description | Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 |
first_indexed | 2024-09-23T08:13:29Z |
format | Thesis |
id | mit-1721.1/122760 |
institution | Massachusetts Institute of Technology |
language | eng |
last_indexed | 2024-09-23T08:13:29Z |
publishDate | 2019 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/1227602019-11-22T03:47:13Z Neural attentions for natural language understanding and modeling Luo, Hongyin. James Glass. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science Electrical Engineering and Computer Science. Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 Cataloged from PDF version of thesis. Includes bibliographical references (pages 85-92). In this thesis, we explore the use of neural attention mechanisms to improve natural language representation learning, a fundamental concept in modern natural language processing. With the proposed attention algorithms, our models achieve significant improvements on both language modeling and natural language understanding tasks. We regard language modeling as a representation learning task that learns to align local word contexts with their following words. We explore the use of attention mechanisms over both the context and the following words to improve the performance of language models, and measure perplexity improvements on classic language modeling tasks. To learn better context representations, we use a self-attention mechanism with a convolutional neural network (CNN) to simulate long short-term memory (LSTM) networks. The model processes sequential data in parallel while still achieving competitive performance. We also propose a phrase induction model with headword attention to learn embeddings of following phrases. The model learns reasonable phrase segments and outperforms several state-of-the-art language models on different data sets. The approach outperformed the AWD-LSTM model, reducing perplexity by 2 points on the Penn Treebank and Wikitext-2 data sets, and achieved new state-of-the-art performance on the Wikitext-103 data set with a perplexity of 17.4. 
For language understanding tasks, we propose the use of a self-attention CNN for video question answering. This model achieved 66.69% MAP@1 and 87.42% MAP@5 on video retrieval, and 57.13% MAP@1 and 80.75% MAP@5 on a moment detection task, significantly outperforming the baseline video retrieval engine. Finally, we also investigate an end-to-end co-reference resolution model that applies cross-sentence attention to utilize knowledge in contextual data and learn better contextualized word and span embeddings. This study was supported in part by the Ford Motor Company. by Hongyin Luo. S.M. S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science 2019-11-04T20:22:52Z 2019-11-04T20:22:52Z 2019 2019 Thesis https://hdl.handle.net/1721.1/122760 1124925471 eng MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582 92 pages application/pdf Massachusetts Institute of Technology |
spellingShingle | Electrical Engineering and Computer Science. Luo, Hongyin. Neural attentions for natural language understanding and modeling |
title | Neural attentions for natural language understanding and modeling |
title_full | Neural attentions for natural language understanding and modeling |
title_fullStr | Neural attentions for natural language understanding and modeling |
title_full_unstemmed | Neural attentions for natural language understanding and modeling |
title_short | Neural attentions for natural language understanding and modeling |
title_sort | neural attentions for natural language understanding and modeling |
topic | Electrical Engineering and Computer Science. |
url | https://hdl.handle.net/1721.1/122760 |
work_keys_str_mv | AT luohongyin neuralattentionsfornaturallanguageunderstandingandmodeling |
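The abstract above centers on self-attention over word contexts. As an illustration only (not the thesis's actual model, which combines self-attention with a CNN and headword attention), a minimal sketch of scaled dot-product self-attention over a toy token sequence, in plain Python:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(embeddings):
    """Scaled dot-product self-attention over a sequence of vectors.

    Each position attends to every position (including itself); the
    output at position i is the attention-weighted average of all
    input vectors. Queries, keys, and values are the embeddings
    themselves here (no learned projections), for simplicity.
    """
    d = len(embeddings[0])
    scale = math.sqrt(d)
    outputs = []
    for q in embeddings:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in embeddings]
        weights = softmax(scores)
        out = [sum(w * v[j] for w, v in zip(weights, embeddings))
               for j in range(d)]
        outputs.append(out)
    return outputs

# Toy 3-token sequence with 2-dimensional embeddings.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(seq)
```

Because every output is a convex combination of the inputs, each output component stays within the range of the corresponding input components; the thesis's contribution lies in how such attention is combined with convolutional context encoders and phrase-level modeling.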