Neural techniques for modeling visually grounded speech
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Main author: | Leidal, Kenneth (Kenneth Knute) |
---|---|
Other authors: | James Glass and David Harwath |
Material type: | Thesis |
Language: | eng |
Published: | Massachusetts Institute of Technology, 2018 |
Subjects: | Electrical Engineering and Computer Science |
Links: | http://hdl.handle.net/1721.1/119562 |
author | Leidal, Kenneth (Kenneth Knute) |
author2 | James Glass and David Harwath. |
collection | MIT |
description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. |
format | Thesis |
id | mit-1721.1/119562 |
institution | Massachusetts Institute of Technology |
language | eng |
publishDate | 2018 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/119562 (2019-04-11T12:09:26Z). Neural techniques for modeling visually grounded speech. Leidal, Kenneth (Kenneth Knute). Advisors: James Glass and David Harwath. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science. Thesis: M. Eng., 2018. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 103-107).
Abstract: In this thesis, I explore state-of-the-art techniques for using neural networks to learn semantically rich representations of visual and audio data. In particular, I analyze and extend the model introduced by Harwath et al. (2016), a neural architecture that learns a non-linear similarity metric between images and audio captions using a sampled margin rank loss. In Chapter 1, I provide background on multimodal learning and motivate the need for further research in the area; I also give an overview of Harwath et al. (2016)'s model, variants of which are used throughout the rest of the thesis. In Chapter 2, I present a quantitative and qualitative analysis of the modality retrieval behavior of the state-of-the-art architecture used by Harwath et al. (2016), identifying a bias towards certain examples and proposing a solution to counteract that bias. In Chapter 3, I introduce the property of modality invariance and explain a regularization technique I created to promote this property in learned semantic embedding spaces. In Chapter 4, I apply the architecture to a new dataset containing videos, which offers unique opportunities to include temporal visual data and ambient audio unavailable in images. In addition, the video domain presents new challenges, as the data density increases with the additional time dimension. I conclude with a discussion of multimodal learning, language acquisition, and unsupervised learning in general.
By Kenneth Leidal. M. Eng. Deposited 2018-12-11T20:40:14Z. 2018. Thesis. http://hdl.handle.net/1721.1/119562. 1076274574. eng. MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission (http://dspace.mit.edu/handle/1721.1/7582). 107 pages, application/pdf. Massachusetts Institute of Technology. |
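The sampled margin rank loss named in the abstract is a standard cross-modal ranking objective. As a rough illustrative sketch only (not the thesis's exact formulation; the similarity matrix, margin value, and one-impostor-per-anchor sampling scheme are assumptions for illustration), it pushes the similarity of each true image-caption pair above that of sampled impostor pairs by at least a margin:

```python
import numpy as np

def sampled_margin_rank_loss(sim, margin=1.0, rng=None):
    """Illustrative sampled margin rank loss over a batch.

    sim is an (N, N) similarity matrix where sim[i, j] scores image i
    against caption j; true pairs lie on the diagonal. For each true
    pair, one impostor caption and one impostor image are sampled, and
    the loss penalizes any impostor whose similarity comes within
    `margin` of the true pair's similarity.
    """
    rng = rng or np.random.default_rng(0)
    n = sim.shape[0]
    total = 0.0
    for i in range(n):
        # Sample an impostor index different from the anchor.
        j = rng.choice([k for k in range(n) if k != i])
        # Image anchor i vs. impostor caption j.
        total += max(0.0, margin - sim[i, i] + sim[i, j])
        # Caption anchor i vs. impostor image j.
        total += max(0.0, margin - sim[i, i] + sim[j, i])
    return total / n

# A well-separated batch incurs zero loss; an uninformative one does not.
sampled_margin_rank_loss(np.array([[2.0, 0.0], [0.0, 2.0]]))  # → 0.0
sampled_margin_rank_loss(np.zeros((2, 2)))                    # → 2.0
```

Minimizing this objective over both retrieval directions (image-to-caption and caption-to-image) is what shapes the shared semantic embedding space the abstract describes.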
title | Neural techniques for modeling visually grounded speech |
topic | Electrical Engineering and Computer Science. |
url | http://hdl.handle.net/1721.1/119562 |