Continual learning in low-rank orthogonal subspaces
In continual learning (CL), a learner is faced with a sequence of tasks arriving one after the other, and the goal is to remember all of the tasks once the continual learning experience is finished. The prior art in CL uses episodic memory, parameter regularization, or extensible network structures to...
| Main Authors: | , , , |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | NIPS Proceedings, 2020 |