Learning visual models from paired audio-visual examples

Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.

Bibliographic Details
Main Author: Owens, Andrew (Andrew Hale)
Other Authors: William Freeman.
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2017
Subjects: Electrical Engineering and Computer Science.
Online Access: http://hdl.handle.net/1721.1/107352
Notes
Cataloged from PDF version of thesis. Includes bibliographical references (pages 93-104).
Physical Description: 104 pages, application/pdf.
Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (http://dspace.mit.edu/handle/1721.1/7582)

Abstract
From the clink of a mug placed onto a saucer to the bustle of a busy café, our days are filled with visual experiences that are accompanied by distinctive sounds. In this thesis, we show that these sounds can provide a rich training signal for learning visual models. First, we propose the task of predicting the sound that an object makes when struck as a way of studying physical interactions within a visual scene. We demonstrate this idea by training an algorithm to produce plausible soundtracks for videos in which people hit and scratch objects with a drumstick. Then, with human studies and automated evaluations on recognition tasks, we verify that the sounds produced by the algorithm convey information about actions and material properties. Second, we show that ambient audio - e.g., crashing waves, people speaking in a crowd - can also be used to learn visual models. We train a convolutional neural network to predict a statistical summary of the sounds that occur within a scene, and we demonstrate that the visual representation learned by the model conveys information about objects and scenes.
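To make the abstract's first contribution concrete: predicting impact sounds from silent video can be set up as a regression from a sequence of frames to a sequence of sound features. The PyTorch sketch below is a minimal, hypothetical version of that setup; the layer sizes, the 64-dimensional sound features, and the mean-squared-error loss are illustrative assumptions, not the architecture or objective used in the thesis.

```python
import torch
import torch.nn as nn

class SoundPredictor(nn.Module):
    """Maps a sequence of video frames to a sequence of sound-feature
    vectors (e.g., per-frame band energies of a spectrogram).
    All dimensions here are illustrative assumptions."""

    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # Tiny per-frame image encoder; a stand-in for a full CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Recurrent layer integrates appearance and motion over time.
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1))  # (B*T, 64)
        h, _ = self.rnn(z.view(b, t, -1))       # (B, T, hidden)
        return self.head(h)                     # (B, T, feat_dim)

# Toy forward/backward pass with random tensors standing in for a
# drumstick-impact clip and its extracted sound features.
model = SoundPredictor()
frames = torch.randn(2, 15, 3, 128, 128)
sound_feats = torch.randn(2, 15, 64)
loss = nn.functional.mse_loss(model(frames), sound_feats)
loss.backward()
```

The recurrent layer is what would let such a model tie a predicted sound to the moment of contact rather than to the clip as a whole.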