GDumb: A simple approach that questions our progress in continual learning
We discuss a general formulation for the Continual Learning (CL) problem for classification—a learning task where a stream provides samples to a learner and the goal of the learner, depending on the samples it receives, is to continually upgrade its knowledge about the old classes and learn new ones...
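The abstract describes a class-incremental stream setting: samples arrive over time and the learner must keep updating its knowledge of old classes while absorbing new ones, without revisiting earlier data. A purely illustrative sketch of that setting follows (this is not the paper's code; the names `Learner`, `fit_increment`, and `run_stream` are invented here for the example):

```python
from typing import Iterable, List, Set, Tuple

Sample = Tuple[List[float], int]  # (feature vector, class label)


class Learner:
    """Toy learner that only tracks which classes it has seen so far."""

    def __init__(self) -> None:
        self.known_classes: Set[int] = set()

    def fit_increment(self, batch: List[Sample]) -> None:
        # A real continual-learning method would update model parameters
        # here while trying not to forget previously learned classes;
        # this toy version just records the labels it has encountered.
        self.known_classes.update(label for _, label in batch)


def run_stream(stream: Iterable[List[Sample]], learner: Learner) -> None:
    # The stream delivers batches one at a time; the learner must update
    # continually and cannot revisit earlier batches.
    for batch in stream:
        learner.fit_increment(batch)


if __name__ == "__main__":
    toy_stream = [
        [([0.1], 0), ([0.2], 0)],  # first: only class 0
        [([0.9], 1)],              # later: class 1 appears
        [([0.5], 2), ([0.4], 0)],  # later still: class 2, plus old class 0
    ]
    learner = Learner()
    run_stream(toy_stream, learner)
    print(sorted(learner.known_classes))  # -> [0, 1, 2]
```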
| Main Authors: | Prabhu, A, Torr, PHS, Dokania, PK |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Springer International Publishing, 2020 |
Similar items

- RanDumb: a simple approach that questions the efficacy of continual representation learning
  By: Prabhu, A, et al.
  Published: (2025)
- RanDumb: random representations outperform online continually learned representations
  By: Prabhu, A, et al.
  Published: (2025)
- Continual learning in low-rank orthogonal subspaces
  By: Chaudhry, A, et al.
  Published: (2020)
- An embarrassingly simple approach to zero-shot learning
  By: Romera-Paredes, B, et al.
  Published: (2015)
- Progressive skeletonization: trimming more fat from a network at initialization
  By: de Jorge, P, et al.
  Published: (2020)