Rapid adaptation in online continual learning: are we evaluating it right?
We revisit the common practice of evaluating the adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric is unreliable, as even vacuous blind classifiers,...
Main Authors: Hammoud, HAAK; Prabhu, A; Lim, S-N; Torr, PHS; Bibi, A; Ghanem, B
Format: Conference item
Language: English
Published: IEEE, 2024
Similar Items
- Real-time evaluation in online continual learning: a new hope
  by: Ghunaim, Y, et al. Published: (2023)
- Computationally budgeted continual learning: what does matter?
  by: Prabhu, A, et al. Published: (2023)
- Don't FREAK out: a frequency-inspired approach to detecting backdoor poisoned samples in DNNs
  by: Hammoud, HAAK, et al. Published: (2023)
- SynthCLIP: are we ready for a fully synthetic CLIP training?
  by: Hammoud, HAAK, et al. Published: (2024)
- On pretraining data diversity for self-supervised learning
  by: Hammoud, HAAK, et al. Published: (2024)