Summary: During the last decade, we have witnessed tremendous progress in Machine Learning, especially in the area of Deep Learning, a.k.a. “Learning Representations” (LearnRep for short). There is even an International Conference on Learning Representations.
Despite the huge success of LearnRep, there is a somewhat overlooked dimension of research that we would like to discuss in this report: we observe a chicken-and-egg problem between “learning” and “representations”. In the view of traditional Machine Learning and Deep Learning, “learning” is the “first-class citizen”: a learning system typically starts from scratch, and the learning process leads to good “representations”.
In contrast to the above view, we propose the concept of “Representations That Learn” (RepLearn, or Meta Learning): one can start from a “representation” that is either learned, evolved, or even “intelligently designed”. Unlike a system built from scratch, this representation already has some functionality (e.g., reasoning, memorizing, theory of mind, etc., depending on the task). In addition, such a representation must support a completely new level of learning; hence we have a “representation that learns”.
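To make this concrete, the following is a minimal PyTorch sketch of our own (illustrative only, not an algorithm from this report): a small encoder stands in for a representation that already exists before the new learning begins, and a new level of learning then takes place on top of it. The module names, dimensions, and toy task are all hypothetical.

```python
import torch
import torch.nn as nn

# A stand-in for a representation that exists before the new learning
# begins (here randomly initialized; in general it could have been
# learned, evolved, or hand-designed).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
for p in encoder.parameters():
    p.requires_grad = False  # the prior representation is held fixed

# The new level of learning happens on top of the fixed representation.
head = nn.Linear(16, 2)
opt = torch.optim.SGD(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))  # hypothetical task data
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(head(encoder(x)), y)
    loss.backward()
    opt.step()
```

The point of the sketch is only the division of labor: the pre-existing representation supplies structure, and the gradient loop is the new level of learning that it supports.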
Furthermore, one can push this direction further and define “Hyper-learning”: multiple levels of representations are formed, and each level of representation supports a level of learning that leads to the representation of the next level. Note that this is different from stacking multiple layers in a deep neural network. Instead, it is similar to how an operating system is implemented: an OS has at least three levels of representation, namely electrical signals on transistors, machine language, and high-level language.
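As a rough illustration of Hyper-learning (again our own sketch under simplifying assumptions, not a prescribed method), the loop below forms representations level by level: each new level is trained on top of the frozen levels beneath it and is then frozen itself, so that it can serve as the substrate for the next level, much as machine language sits on top of transistor signals. The per-level linear-probe objective, layer sizes, and data are hypothetical.

```python
import torch
import torch.nn as nn

def train_level(frozen_stack, new_layer, x, y, steps=50):
    # Hypothetical per-level objective: a linear probe on the new representation.
    probe = nn.Linear(new_layer.out_features, 2)
    params = list(new_layer.parameters()) + list(probe.parameters())
    opt = torch.optim.SGD(params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        h = x
        with torch.no_grad():              # lower levels are fixed substrates
            for layer in frozen_stack:
                h = torch.relu(layer(h))
        loss = loss_fn(probe(torch.relu(new_layer(h))), y)
        loss.backward()
        opt.step()
    for p in new_layer.parameters():
        p.requires_grad = False            # freeze: this level now supports the next
    return new_layer

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))  # toy data
stack, dims = [], [32, 64, 32, 16]
for d_in, d_out in zip(dims, dims[1:]):
    stack.append(train_level(stack, nn.Linear(d_in, d_out), x, y))
```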
We believe RepLearn is similar to how humans learn: many representations in our brain are formed before any learning happens (i.e., they are genetically coded). They serve as prior knowledge of the world and support one level of high-level learning (e.g., memorizing events, learning skills, etc.).