Learning feed-forward one-shot learners
One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. To make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
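The learnet construction described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch of the idea, under stated assumptions: the architecture, layer sizes, and names (`FactorizedLearnet`, `to_filters`) are invented for illustration and do not reproduce the authors' implementation. A small embedding network maps the single exemplar `z` to per-channel (diagonal) filters of one factorized convolutional layer of the pupil network; the surrounding 1×1 projections are ordinary weights shared across exemplars, which is one way to realize the kind of factorization the abstract mentions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedLearnet(nn.Module):
    """Hypothetical sketch: a learnet predicts the diagonal (per-channel)
    filters of one factorized conv layer of a pupil network from a single
    exemplar z; the 1x1 factors M and M' are shared, ordinary parameters."""

    def __init__(self, channels=16, kernel_size=5):
        super().__init__()
        self.channels, self.kernel_size = channels, kernel_size
        # Learnet branch: embed the exemplar, then regress one k x k
        # filter per channel.
        self.embed = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_filters = nn.Linear(channels, channels * kernel_size ** 2)
        # Shared 1x1 factors of the pupil layer (the M and M' projections).
        self.M = nn.Conv2d(3, channels, 1)
        self.M_prime = nn.Conv2d(channels, channels, 1)

    def forward(self, z, x):
        b, c, k = x.size(0), self.channels, self.kernel_size
        # Predict per-channel filters from the single exemplar.
        code = self.embed(z).flatten(1)                    # (b, c)
        filt = self.to_filters(code).view(b * c, 1, k, k)  # one filter/channel
        # Pupil layer on the query x: project, apply the predicted diagonal
        # convolution (grouped conv handles the whole batch at once), project.
        h = self.M(x)                                      # (b, c, H, W)
        h = h.reshape(1, b * c, h.size(2), h.size(3))
        h = F.conv2d(h, filt, padding=k // 2, groups=b * c)
        h = h.reshape(b, c, h.size(2), h.size(3))
        return self.M_prime(h)

# One exemplar and one query per episode (random data for illustration).
z = torch.randn(2, 3, 64, 64)
x = torch.randn(2, 3, 64, 64)
print(FactorizedLearnet()(z, x).shape)  # torch.Size([2, 16, 64, 64])
```

In a full learning-to-learn setup, exemplar–query pairs would be sampled per episode and the whole construction trained end-to-end against a one-shot objective, as the abstract describes.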
Main Authors: | Bertinetto, L; Henriques, J; Valmadre, J; Torr, P; Vedaldi, A |
---|---|
Format: | Conference item |
Published: | Massachusetts Institute of Technology Press, 2016 |
author | Bertinetto, L; Henriques, J; Valmadre, J; Torr, P; Vedaldi, A |
---|---|
collection | OXFORD |
description | One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. To make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark. |
format | Conference item |
id | oxford-uuid:d2e7d108-0f6d-46d6-8d16-1457027d3123 |
institution | University of Oxford |
publishDate | 2016 |
publisher | Massachusetts Institute of Technology Press |
record_format | dspace |
title | Learning feed-forward one-shot learners |