Learning digits via joint audio-visual representations

Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.

Bibliographic Details
Main Author: Kashyap, Karan
Other Authors: James Glass.
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2018
Subjects: Electrical Engineering and Computer Science.
Online Access: http://hdl.handle.net/1721.1/113143

Abstract:
Our goal is to explore models for language learning in the manner that humans learn languages as children. Children do not rely on intermediary text transcriptions to correlate the visual and audio inputs from their environment; rather, they make direct connections between what they see and what they hear, sometimes even across languages. In this thesis, we present weakly-supervised models for learning representations of numerical digits across two modalities: speech and images. We experiment with convolutional neural network architectures that take spoken utterances of numerical digits and images of handwritten digits as inputs. In nearly all cases we randomly initialize the network weights (without pre-training) and evaluate the model's ability to return a matching image for a spoken input, or to identify the number of overlapping digits between an utterance and an image. We also provide visualizations as evidence that our models are truly learning correspondences between the two modalities.

Notes:
This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 59-60). 60 pages, application/pdf.

Rights:
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. See http://dspace.mit.edu/handle/1721.1/7582.
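The abstract describes a two-branch setup: CNNs that embed spoken digit utterances and handwritten digit images into a shared space, trained from random initialization so that matching pairs score higher than mismatched ones. The record does not give the thesis's actual architecture or loss, so the following is only a minimal PyTorch sketch of that general idea; the layer sizes, embedding dimension, spectrogram shape, and margin-based ranking loss are all illustrative assumptions.

```python
# Minimal sketch (NOT the thesis's published architecture) of a weakly-supervised
# joint audio-visual embedding: two randomly initialized CNN branches map a
# spoken-digit spectrogram and a handwritten-digit image into a shared space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageBranch(nn.Module):
    """CNN over 28x28 grayscale digit images -> L2-normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 7 * 7, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class AudioBranch(nn.Module):
    """CNN over (1 x freq x time) spectrograms -> L2-normalized embedding.
    Adaptive pooling makes it indifferent to utterance length."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool over freq/time -> fixed size
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=-1)

def matched_pair_loss(img_emb, aud_emb, margin=0.2):
    """Triplet-style ranking loss: the i-th image and i-th utterance are a
    matching pair; every other pairing in the batch is treated as a mismatch."""
    sims = img_emb @ aud_emb.t()               # (B, B) similarity matrix
    pos = sims.diag().unsqueeze(1)             # matching-pair scores, (B, 1)
    hinge = F.relu(margin + sims - pos)        # penalize close impostors
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    hinge = hinge.masked_fill(mask, 0.0)       # ignore the positive terms
    return hinge.sum() / (sims.size(0) * (sims.size(0) - 1))

# Toy usage: random tensors standing in for a batch of 8 paired examples.
imgs = torch.randn(8, 1, 28, 28)     # handwritten-digit images
auds = torch.randn(8, 1, 40, 100)    # e.g. 40 mel bands x 100 frames
img_net, aud_net = ImageBranch(), AudioBranch()
loss = matched_pair_loss(img_net(imgs), aud_net(auds))
loss.backward()
```

Ranking the true audio-image pair above all in-batch impostors is one common way to train such joint embeddings without transcriptions; the matching-image retrieval evaluation mentioned in the abstract then reduces to a nearest-neighbor search in the shared embedding space.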