UNSUPERVISED LEARNING OF VISUAL STRUCTURE USING PREDICTIVE GENERATIVE NETWORKS

The ability to predict future states of the environment is a central pillar of intelligence. At its core, effective prediction requires an internal model of the world and an understanding of the rules by which the world changes. Here, we explore the internal models developed by deep neural networks trained using a loss based on predicting future frames in synthetic video sequences, using an Encoder-Recurrent-Decoder framework (Fragkiadaki et al., 2015). We first show that this architecture can achieve excellent performance in visual sequence prediction tasks, including state-of-the-art performance in a standard “bouncing balls” dataset (Sutskever et al., 2009). We then train on clips of out-of-the-plane rotations of computer-generated faces, using both mean-squared error and a generative adversarial loss (Goodfellow et al., 2014), extending the latter to a recurrent, conditional setting. Despite being trained end-to-end to predict only pixel-level information, our Predictive Generative Networks learn a representation of the latent variables of the underlying generative process. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. We argue that prediction can serve as a powerful unsupervised loss for learning rich internal representations of high-level object features.
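The abstract describes an Encoder-Recurrent-Decoder network trained to predict the next frame of a video at the pixel level. The sketch below illustrates the general shape of such a model in PyTorch; it is a minimal illustration under stated assumptions, not the authors' implementation, and every layer size, channel count, frame resolution, and hyperparameter is an assumption made for the example.

    # Minimal sketch of a Predictive Generative Network in the spirit of the
    # Encoder-Recurrent-Decoder setup described above. All shapes and
    # hyperparameters are illustrative assumptions, not the paper's settings.
    import torch
    import torch.nn as nn

    class PredictiveGenerativeNetwork(nn.Module):
        """Encode each frame, advance an LSTM over the encoded sequence,
        and decode the final hidden state into a predicted next frame."""

        def __init__(self, hidden_size=256):
            super().__init__()
            # Convolutional encoder: 1x64x64 frame -> feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # -> 16x16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, hidden_size),
            )
            # Recurrent core over the sequence of frame encodings.
            self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
            # Deconvolutional decoder: hidden state -> predicted next frame.
            self.decoder = nn.Sequential(
                nn.Linear(hidden_size, 64 * 16 * 16),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # -> 32x32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # -> 64x64
                nn.Sigmoid(),
            )

        def forward(self, frames):
            # frames: (batch, time, 1, 64, 64)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.reshape(b * t, 1, 64, 64)).reshape(b, t, -1)
            out, _ = self.rnn(feats)
            return self.decoder(out[:, -1])  # prediction of frame t+1

    # One training step with the pixel-level mean-squared-error loss.
    model = PredictiveGenerativeNetwork()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    clip = torch.rand(8, 5, 1, 64, 64)   # stand-in for a batch of video clips
    target = torch.rand(8, 1, 64, 64)    # stand-in for the true next frames
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(clip), target)
    loss.backward()
    opt.step()

The adversarial variant described in the abstract would augment or replace the MSE term with a GAN loss: a discriminator, conditioned on the preceding frames, scores real versus predicted next frames, and the predictor is additionally trained to fool it, yielding the recurrent, conditional extension of Goodfellow et al. (2014) that the authors describe.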

Bibliographic Details
Main Authors: Lotter, William; Kreiman, Gabriel; Cox, David
Format: Technical Report
Language: English (en_US)
Published: Center for Brains, Minds and Machines (CBMM), arXiv, 2015
Series: CBMM Memo Series; 040
Institution: Massachusetts Institute of Technology
Subjects: Neural Networks; Encoder-Recurrent-Decoder framework; Vision; Predictive Generative Networks; Neuroscience
Online Access: http://hdl.handle.net/1721.1/100275
arXiv ID: arXiv:1511.06380v1
License: Attribution-NonCommercial 3.0 United States (http://creativecommons.org/licenses/by-nc/3.0/us/)
Funding: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.