Logarithmic Continual Learning

We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples come in subsequent tasks, and the trained model can access only the current task. Contempor...
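The abstract refers to generative rehearsal: while training on the current task, the model also replays synthetic samples of earlier tasks produced by a generative model. The sketch below is a minimal illustration of that baseline loop in PyTorch, not the paper's logarithmic-rehearsal architecture; the function name `train_task` and the `prev_generator.sample` interface are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def train_task(model, prev_generator, task_loader, optimizer):
    """One pass over the current task with generative rehearsal of past tasks.

    Illustrative sketch only; `prev_generator.sample` is a hypothetical API
    returning (inputs, labels) synthesized for previously learned tasks.
    """
    loss_fn = nn.CrossEntropyLoss()
    for x, y in task_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # loss on the current task's real data
        if prev_generator is not None:
            # Rehearsal step: mix in samples replayed from earlier tasks so
            # the model does not forget them while fitting the current task.
            with torch.no_grad():
                x_rep, y_rep = prev_generator.sample(x.size(0))
            loss = loss + loss_fn(model(x_rep), y_rep)
        loss.backward()
        optimizer.step()
```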

Bibliographic Details
Main Authors: Wojciech Masarczyk, Paweł Wawrzyński, Daniel Marczak, Kamil Deja, Tomasz Trzciński
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9934894/