Representations That Learn vs. Learning Representations

Description

During the last decade we have witnessed tremendous progress in Machine Learning, and especially in the area of Deep Learning, a.k.a. “Learning Representations” (LearnRep for short); there is even an International Conference on Learning Representations. Despite the huge success of LearnRep, there is a somewhat overlooked dimension of research that we would like to discuss in this report. We observe a chicken-and-egg problem between “learning” and “representations”. In the view of traditional Machine Learning and Deep Learning, “learning” is the “first-class citizen”: a learning system typically starts from scratch, and the learning process leads to good “representations”.

In contrast to the above view, we propose the concept of “Representations That Learn” (RepLearn, or Meta-Learning): one can start from a “representation” that is learned, evolved, or even “intelligently designed”. Unlike a system built from scratch, this representation already has some functionality (e.g., reasoning, memorizing, or theory of mind, depending on the task). In addition, such a representation must support a completely new level of learning; hence we have a “representation that learns”.

One can go further in this direction and define “Hyper-learning”: multiple levels of representation are formed, and each level of representation supports a level of learning that leads to the representation of the next level. Note that this is different from building multiple layers of a deep neural network. Instead, it is similar to how an operating system is implemented: an OS has at least three levels of representation, namely electrical signals on transistors, machine language, and a high-level language.

We believe RepLearn is similar to how humans learn: many representations in our brain are formed before any learning happens (i.e., they are genetically coded). They serve as prior knowledge of the world and support a level of high-level learning (e.g., memorizing events, learning skills, etc.).
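
A minimal sketch may help make the RepLearn contrast concrete. The code below is not from the report: the class FixedPriorRepresentation, the toy regression task, and all parameter choices are hypothetical stand-ins. A frozen feature map plays the role of a pre-built (“genetically coded”) prior, and a trainable linear read-out on top of it plays the role of the new level of learning that the representation must support.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a pre-built prior: a fixed, pre-wired
    # feature map. In RepLearn terms it could be learned, evolved, or
    # designed; here it is random-but-frozen.
    class FixedPriorRepresentation:
        def __init__(self, in_dim, feat_dim):
            self.W = rng.normal(size=(in_dim, feat_dim))  # never updated

        def __call__(self, x):
            return np.tanh(x @ self.W)  # fixed nonlinearity, fixed weights

    # The new level of learning the representation must support:
    # fit a linear read-out on top of the frozen features.
    def train_readout(rep, X, y, lr=0.1, steps=500):
        F = rep(X)                                # features from the prior
        w = np.zeros(F.shape[1])
        for _ in range(steps):
            w -= lr * F.T @ (F @ w - y) / len(y)  # least-squares gradient step
        return w

    X = rng.normal(size=(200, 5))                 # toy regression task
    y = np.sin(X).sum(axis=1)
    prior = FixedPriorRepresentation(in_dim=5, feat_dim=64)
    w = train_readout(prior, X, y)
    print("train MSE:", np.mean((prior(X) @ w - y) ** 2))

Only w is ever updated; the prior itself never changes, which is the sense in which the representation comes first and supports, rather than results from, a level of learning.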

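The multi-level “Hyper-learning” idea can be sketched in the same hedged spirit. In this illustrative example (again not from the report; the PCA-per-level choice is an assumption made only to keep the sketch concrete and runnable), each level runs one round of learning on the current representation, the learned encoder is frozen, and its output becomes the representation on which the next level learns, echoing the transistors / machine-language / high-level-language layering of an OS.

    import numpy as np

    rng = np.random.default_rng(1)

    def learn_level(F, k):
        # One level of learning: fit a k-dimensional linear encoder
        # (PCA via SVD) to the current representation. The encoder is
        # then frozen; its output is the next level's representation.
        Fc = F - F.mean(axis=0)
        _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
        return Vt[:k].T                          # frozen projection matrix

    X = rng.normal(size=(200, 32))               # raw input = level-0 representation
    F, encoders = X, []
    for k in (16, 8, 4):                         # three levels, echoing the OS analogy
        E = learn_level(F, k)                    # learning happens at this level...
        encoders.append(E)
        F = np.tanh((F - F.mean(axis=0)) @ E)    # ...frozen output feeds the next level
    print([E.shape for E in encoders])           # [(32, 16), (16, 8), (8, 4)]

Unlike layers of a deep network trained end-to-end, each level here is completed and frozen before the next level's learning begins.
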
Bibliographic Details
Main Authors: Liao, Qianli, Poggio, Tomaso
Other Authors: Center for Brains, Minds, and Machines
Format: Technical Report
Language: en_US
Published: 2018
Institution: Massachusetts Institute of Technology
Online Access: http://hdl.handle.net/1721.1/119834

Funding and Acknowledgments

This work is supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. We thank Brando Miranda for useful comments and discussion after reading this draft in 06/2017.