Learning Sight from Sound: Ambient Sound Provides Supervision for Visual Learning


Bibliographic Details
Main Authors: Owens, Andrew, Wu, Jiajun, McDermott, Josh H, Freeman, William T, Torralba, Antonio
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Springer Nature America, Inc, 2021
Online Access: https://hdl.handle.net/1721.1/135848
Description
Summary: © 2018, Springer Science+Business Media, LLC, part of Springer Nature. The sound of crashing waves, the roar of fast-moving cars—sound conveys important information about the objects in our surroundings. In this work, we show that ambient sounds can be used as a supervisory signal for learning visual models. To demonstrate this, we train a convolutional neural network to predict a statistical summary of the sound associated with a video frame. We show that, through this process, the network learns a representation that conveys information about objects and scenes. We evaluate this representation on several recognition tasks, finding that its performance is comparable to that of other state-of-the-art unsupervised learning methods. Finally, we show through visualizations that the network learns units that are selective to objects that are often associated with characteristic sounds. This paper extends an earlier conference paper, Owens et al. (in: European conference on computer vision, 2016b), with additional experiments and discussion.
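The summary mentions predicting a "statistical summary of the sound" associated with a video frame. As a rough illustration of what such a summary target could look like, here is a minimal sketch that reduces an audio clip to mean log-energies in a few log-spaced frequency bands. This is an assumption for illustration only: the function name `sound_summary`, the band layout, and the band count are hypothetical, and the paper's actual features are richer sound-texture statistics.

```python
import numpy as np

def sound_summary(waveform, sr=16000, n_bands=8):
    """Crude statistical summary of an audio clip: mean power in
    log-spaced frequency bands (a simplified stand-in for the
    sound-texture statistics described in the paper)."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sr)     # bin frequencies (Hz)
    edges = np.geomspace(20.0, sr / 2, n_bands + 1)        # log-spaced band edges
    summary = np.empty(n_bands)
    for i in range(n_bands):
        mask = (freqs >= edges[i]) & (freqs < edges[i + 1])
        summary[i] = spectrum[mask].mean() if mask.any() else 0.0
    return np.log1p(summary)                               # compress dynamic range

# Example: one second of a 440 Hz tone plus low-level noise
t = np.linspace(0, 1, 16000, endpoint=False)
clip = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(16000)
stats = sound_summary(clip)
print(stats.shape)  # (8,)
```

A fixed-length vector like this can serve as a regression or (after clustering) classification target for a convolutional network that sees only the video frame, which is the self-supervision idea the abstract describes.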