Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space

Bibliographic Details
Main Authors: Urtasun, Raquel, Quattoni, Ariadna, Lawrence, Neil, Darrell, Trevor
Published: 2008
Online Access: http://hdl.handle.net/1721.1/41517
Description
Summary: When a series of problems are related, representations derived from learning earlier tasks may be useful in solving later problems. In this paper we propose a novel approach to transfer learning with low-dimensional, nonlinear latent spaces. We show how such representations can be jointly learned across multiple tasks in a Gaussian Process framework. When the learned representation is transferred to new tasks with relatively few training examples, learning can be faster and/or more accurate. Experiments on digit recognition and newsgroup classification tasks show significantly improved performance compared to baselines that use either a representation derived from a semi-supervised learning approach or a discriminative approach trained only on the target data.
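
The record itself contains no implementation details. As a rough sketch of the shared-latent-space idea described in the abstract (not the authors' actual model), the following Python code jointly optimizes a single low-dimensional latent representation X against the Gaussian Process marginal likelihoods of two toy tasks, GPLVM-style. The RBF kernel, fixed hyperparameters, toy data, and all function names (rbf_kernel, gplvm_nll, shared_latent_nll) are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
        # Squared-exponential kernel over latent points, with noise jitter.
        sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
        K = variance * np.exp(-0.5 * sq / lengthscale**2)
        return K + noise * np.eye(len(X))

    def gplvm_nll(K, Y):
        # Negative GP log marginal likelihood, treating each column of Y
        # as an independent GP output sharing the kernel K.
        N, D = Y.shape
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # K^{-1} Y
        return (0.5 * np.sum(Y * alpha)
                + D * np.sum(np.log(np.diag(L)))
                + 0.5 * N * D * np.log(2.0 * np.pi))

    def shared_latent_nll(x_flat, Ys, N, Q):
        # One latent X is shared: the objective sums the GP likelihoods
        # of all tasks evaluated with the same kernel matrix.
        X = x_flat.reshape(N, Q)
        K = rbf_kernel(X)
        return sum(gplvm_nll(K, Y) for Y in Ys)

    # Toy data: two related tasks observed on the same N instances.
    rng = np.random.default_rng(0)
    N, Q = 30, 2
    Z = rng.normal(size=(N, Q))                  # ground-truth latent
    Y1 = np.tanh(Z @ rng.normal(size=(Q, 5)))    # task-1 features
    Y2 = np.sin(Z @ rng.normal(size=(Q, 4)))     # task-2 features

    x0 = 0.1 * rng.normal(size=N * Q)            # PCA init is typical; random here
    res = minimize(shared_latent_nll, x0, args=([Y1, Y2], N, Q),
                   method="L-BFGS-B")
    X_shared = res.x.reshape(N, Q)               # jointly learned latent space

In the transfer setting the abstract describes, a representation learned this way would then serve as the starting point for a new task with few labeled examples; the paper's actual formulation, including how the latent space is shared and transferred across tasks, is given in the full text at the link above.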