GDumb: A simple approach that questions our progress in continual learning

Full description

We discuss a general formulation for the Continual Learning (CL) problem for classification: a learning task in which a stream provides samples to a learner, and the learner's goal, given the samples it receives, is to continually update its knowledge of old classes while learning new ones. Our formulation takes inspiration from the open-set recognition problem, where test scenarios do not necessarily belong to the training distribution. We also discuss various quirks and assumptions encoded in recently proposed approaches for CL. We argue that some of these oversimplify the problem to an extent that leaves it with very little practical importance and makes it extremely easy to perform well on. To validate this, we propose GDumb, which (1) greedily stores samples in memory as they arrive, and (2) at test time trains a model from scratch using only the samples in memory. We show that even though GDumb is not specifically designed for CL problems, it obtains state-of-the-art accuracies (often by large margins) in almost all experiments when compared against a multitude of recently proposed algorithms. Surprisingly, it outperforms these approaches even in the CL formulations for which they were specifically designed. This, we believe, raises concerns about our progress in CL for classification. Overall, we hope our formulation, characterizations, and discussions will help in designing realistically useful CL algorithms, and that GDumb will serve as a strong contender in that effort.
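The description outlines GDumb's two steps (greedy storage into a bounded memory, then training from scratch on that memory alone) without giving implementation details. Below is a minimal Python sketch of what those two steps could look like. The class-balanced eviction rule and the nearest-class-mean stand-in for a "model trained from scratch" are illustrative assumptions, not details taken from this record or the paper.

```python
import random
from collections import defaultdict


class GreedyMemory:
    """Fixed-capacity sample store filled greedily from the stream.

    Eviction policy (an assumption for this sketch): when full, drop a
    random sample from the currently largest class, which keeps the
    memory roughly class-balanced as new classes arrive.
    """

    def __init__(self, capacity):
        assert capacity > 0
        self.capacity = capacity
        self.store = defaultdict(list)  # label -> list of feature vectors

    def __len__(self):
        return sum(len(xs) for xs in self.store.values())

    def observe(self, x, y):
        """Greedily store one (x, y) pair from the stream."""
        if len(self) < self.capacity:
            self.store[y].append(x)
            return
        # Memory is full: evict from the largest class to make room,
        # unless the incoming class is itself the largest.
        largest = max(self.store, key=lambda c: len(self.store[c]))
        if largest == y:
            return  # adding would only unbalance the memory; skip
        self.store[largest].pop(random.randrange(len(self.store[largest])))
        self.store[y].append(x)


def train_from_scratch(memory):
    """Build a fresh model using only the samples in memory. A
    nearest-class-mean classifier stands in for a full network here."""
    means = {
        y: tuple(sum(col) / len(xs) for col in zip(*xs))
        for y, xs in memory.store.items()
    }

    def predict(x):
        # Assign x to the class with the closest mean (squared L2).
        return min(
            means,
            key=lambda y: sum((a - b) ** 2 for a, b in zip(x, means[y])),
        )

    return predict
```

A toy usage run, with made-up 2-D data, to show the intended flow: observe the stream one sample at a time, then build the model only when predictions are needed.

```python
mem = GreedyMemory(capacity=12)
stream = ([((0.0, 0.0), "a")] * 10
          + [((5.0, 5.0), "b")] * 10
          + [((0.0, 5.0), "c")] * 10)
for x, y in stream:
    mem.observe(x, y)

model = train_from_scratch(mem)  # fresh model, memory samples only
print(model((4.8, 5.1)))         # -> "b"
```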

Bibliographic Details
Main Authors: Prabhu, A; Torr, PHS; Dokania, PK
Format: Conference item
Language: English
Published: Springer International Publishing, 2020