Towards multilingual lexicon discovery from visually grounded speech

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Bibliographic Details
Main Author: Azuh, Emmanuel Mensah
Other Authors: James R. Glass and David Harwath
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2020
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/124231
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 99-103).

Physical Description: 103 pages, application/pdf

Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582

Abstract: In this thesis, we present a method for discovering word-like units and their approximate translations from visually grounded speech across multiple languages. We first train a neural network model to map images and their spoken audio captions, in both English and Hindi, to a shared multimodal embedding space. Next, we use this model to segment and cluster regions of the spoken captions that approximately correspond to words. We then exploit between-cluster similarities in the embedding space to associate English pseudo-word clusters with Hindi pseudo-word clusters, and show that many of these cluster pairings capture semantic translations between English and Hindi words. We present quantitative cross-lingual clustering results, as well as qualitative results in the form of a bilingual picture dictionary. Finally, we repeat the same analysis for a model trained jointly on three languages at once, with Japanese as the third language.
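The cross-lingual cluster-pairing step described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming each pseudo-word cluster is summarized by the mean of its segment embeddings in the shared space and that clusters are paired by nearest-neighbor cosine similarity; the function and array names here are hypothetical stand-ins, not the thesis's actual implementation.

```python
# Illustrative sketch only: pairs pseudo-word clusters across languages by
# cosine similarity of their mean segment embeddings in a shared space.
# The centroid representation and the nearest-neighbor pairing rule are
# assumptions for illustration, not the procedure used in the thesis.
import numpy as np

def pair_clusters(english_centroids: np.ndarray,
                  hindi_centroids: np.ndarray) -> list[tuple[int, int, float]]:
    """For each English cluster centroid, find the most similar Hindi
    cluster centroid by cosine similarity in the shared embedding space."""
    # L2-normalize each centroid so the dot product equals cosine similarity.
    en = english_centroids / np.linalg.norm(english_centroids, axis=1, keepdims=True)
    hi = hindi_centroids / np.linalg.norm(hindi_centroids, axis=1, keepdims=True)
    sims = en @ hi.T  # shape: (num_english_clusters, num_hindi_clusters)
    best = sims.argmax(axis=1)
    return [(i, int(j), float(sims[i, j])) for i, j in enumerate(best)]

# Usage with random stand-in centroids (e.g., 50 English and 40 Hindi
# pseudo-word clusters in a hypothetical 1024-dimensional shared space).
rng = np.random.default_rng(0)
pairs = pair_clusters(rng.normal(size=(50, 1024)), rng.normal(size=(40, 1024)))
print(pairs[:3])  # [(english_cluster, nearest_hindi_cluster, similarity), ...]
```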