Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input


Bibliographic Details
Main Authors: Harwath, David F., Recasens, Adria, Suris Coll-Vinent, Didac, Chuang, Galen, Torralba, Antonio, Glass, James R.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Book
Language: English
Published: Springer International Publishing 2020
Online Access: https://hdl.handle.net/1721.1/123476
Description
Summary: In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets, demonstrating that our models implicitly learn semantically coupled object and word detectors.
Keywords: vision and language; sound; speech; convolutional networks; multimodal learning; unsupervised learning
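
The summary describes convolutional models trained for image-audio retrieval directly from pixels and speech, without labels, segmentations, or alignments. The sketch below is a minimal illustration of that kind of setup, not the authors' released code: it assumes a PyTorch-style implementation, toy encoder architectures, spectrogram audio input (the paper's summary mentions raw waveforms), a pooled grid of local dot-product similarities as the pair score, and a simple margin ranking loss against mismatched within-batch pairs. All module names, layer sizes, and hyperparameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Toy convolutional image branch: maps raw pixels to a grid of D-dim local embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1),
        )

    def forward(self, x):              # x: (B, 3, H, W)
        return self.net(x)             # (B, D, H', W')

class AudioEncoder(nn.Module):
    """Toy convolutional audio branch: maps a log-mel spectrogram to a sequence of D-dim frame embeddings."""
    def __init__(self, dim=128, n_mels=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, (n_mels, 5), stride=(1, 2), padding=(0, 2)), nn.ReLU(),
            nn.Conv2d(64, dim, (1, 5), stride=(1, 2), padding=(0, 2)),
        )

    def forward(self, a):              # a: (B, 1, n_mels, T)
        return self.net(a).squeeze(2)  # (B, D, T')

def pair_similarity(img_feat, aud_feat):
    """Dot product between every image location and every audio frame, averaged into one score per pair."""
    B = img_feat.size(0)
    local = torch.einsum('bdhw,bdt->bhwt', img_feat, aud_feat)  # (B, H', W', T') local similarities
    return local.reshape(B, -1).mean(dim=1)                     # (B,) pooled retrieval score

def ranking_loss(img_feat, aud_feat, margin=1.0):
    """Margin ranking loss: matched pairs should outscore mismatched (rolled) pairs in both directions."""
    B = img_feat.size(0)
    perm = torch.roll(torch.arange(B), 1)
    pos = pair_similarity(img_feat, aud_feat)
    neg_img = pair_similarity(img_feat[perm], aud_feat)   # wrong image, right caption
    neg_aud = pair_similarity(img_feat, aud_feat[perm])   # right image, wrong caption
    return (F.relu(margin - pos + neg_img) + F.relu(margin - pos + neg_aud)).mean()

# Usage with random stand-in data (shapes are illustrative only):
images = torch.randn(4, 3, 224, 224)          # batch of RGB images
spectrograms = torch.randn(4, 1, 40, 1024)    # batch of log-mel spectrograms
img_enc, aud_enc = ImageEncoder(), AudioEncoder()
loss = ranking_loss(img_enc(images), aud_enc(spectrograms))
loss.backward()

In a setup like this, only the pooled score is supervised by the retrieval objective; any localization behavior of the kind the summary describes would have to be read out of the intermediate grid of local image-location/audio-frame similarities, which is left untouched by the loss.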