Continual learning in low-rank orthogonal subspaces
In continual learning (CL), a learner faces a sequence of tasks arriving one after the other, and the goal is to remember all of them once the continual-learning experience is finished. The prior art in CL uses episodic memory, parameter regularization, or extensible network structures to...
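The truncated abstract only names the setting, so as a rough illustration of what "learning in low-rank orthogonal subspaces" can mean, here is a minimal NumPy sketch. It is a hypothetical toy, not the paper's algorithm: the bases `bases`, the projector `P = O @ O.T`, and the linear least-squares model are all assumptions introduced for illustration.

```python
import numpy as np

# Toy sketch (hypothetical, not the paper's code): each task trains inside
# its own low-rank subspace, and the subspaces are mutually orthogonal, so
# later updates cannot move the components that earlier tasks wrote into
# the shared weight vector.
rng = np.random.default_rng(0)
d, rank, n_tasks = 64, 8, 4                         # weight dim, subspace rank, tasks

# Disjoint column blocks of one orthonormal matrix give mutually
# orthogonal low-rank bases O_0, ..., O_{T-1}.
Q, _ = np.linalg.qr(rng.standard_normal((d, rank * n_tasks)))
bases = [Q[:, t * rank:(t + 1) * rank] for t in range(n_tasks)]

def train_task(w, O, X, y, lr=0.1, steps=200):
    """Gradient descent on least squares, with every gradient projected onto span(O)."""
    P = O @ O.T                                     # orthogonal projector onto the task subspace
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * (P @ grad)                     # update stays inside span(O)
    return w

w, snapshots = np.zeros(d), []
for t in range(n_tasks):
    X = rng.standard_normal((128, d))
    y = X @ (bases[t] @ rng.standard_normal(rank))  # task-t target lives in its subspace
    w = train_task(w, bases[t], X, y)
    snapshots.append(w.copy())

# Earlier tasks' subspace components are untouched by later training:
for s in range(n_tasks - 1):
    drift = np.linalg.norm(bases[s].T @ (snapshots[-1] - snapshots[s]))
    print(f"component of w in task-{s} subspace drifted by {drift:.1e}")
```

Because each update lies in span(O_t) and the blocks of Q are pairwise orthogonal, the printed drifts are zero up to floating-point rounding: in this toy setup, interference among tasks vanishes in weight space by construction.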
| Main Authors: | Chaudhry, A, Khan, N, Dokania, PK, Torr, PHS |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | NeurIPS Proceedings, 2020 |
Similar Items
- GDumb: A simple approach that questions our progress in continual learning
  by: Prabhu, A, et al.
  Published: (2020)
- Discovering class-specific pixels for weakly-supervised semantic segmentation
  by: Chaudhry, A, et al.
  Published: (2017)
- RanDumb: random representations outperform online continually learned representations
  by: Prabhu, A, et al.
  Published: (2025)
- RanDumb: a simple approach that questions the efficacy of continual representation learning
  by: Prabhu, A, et al.
  Published: (2025)
- Finding a low-rank basis in a matrix subspace
  by: Nakatsukasa, Y, et al.
  Published: (2016)