RanDumb: random representations outperform online continually learned representations

Continual learning has primarily focused on the issue of catastrophic forgetting and the associated stability-plasticity tradeoffs. However, little attention has been paid to the efficacy of continually learned representations, as representations are learned alongside classifiers throughout the learning process. Our primary contribution is empirically demonstrating that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transform. Our approach embeds raw pixels using a fixed random transform, approximating an RBF kernel initialized before any data is seen. We then train a simple linear classifier on top without storing any exemplars, processing one sample at a time in an online continual learning setting. This method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all standard online continual learning benchmarks. Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios. Extending our investigation to popular exemplar-free scenarios with pretrained models, we find that training only a linear classifier on top of pretrained representations surpasses most continual fine-tuning and prompt-tuning strategies. Overall, our investigation challenges the prevailing assumptions about effective representation learning in online continual learning.
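
For illustration, here is a minimal sketch (not the authors' code) of the pipeline described in the abstract: a fixed random feature map approximating an RBF kernel, followed by a linear classifier updated one sample at a time with no stored exemplars. It assumes scikit-learn's RBFSampler as the random transform and SGDClassifier.partial_fit as the online linear learner; the paper's exact classifier, embedding dimension, and hyperparameters may differ.

# Minimal sketch of the RanDumb idea, under the assumptions stated above.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_classes, n_pixels, embed_dim = 10, 32 * 32 * 3, 2000  # CIFAR-like shapes (assumed)

# Fixed random transform, initialized before any data is seen;
# fit() here only draws the random projection, it does not learn from data.
embedder = RBFSampler(gamma=1.0, n_components=embed_dim, random_state=0)
embedder.fit(np.zeros((1, n_pixels)))

# Simple linear classifier updated online via partial_fit.
clf = SGDClassifier(random_state=0)
classes = np.arange(n_classes)

# Online continual stream: each raw-pixel sample is embedded and used once.
for step in range(1000):
    x = rng.standard_normal((1, n_pixels))   # stand-in for one raw-pixel image
    y = np.array([rng.integers(n_classes)])  # stand-in for its label
    z = embedder.transform(x)                # fixed random embedding
    clf.partial_fit(z, y, classes=classes)   # single-pass linear update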

Bibliographic Details
Main Authors: Prabhu, A; Sinha, S; Kumaraguru, P; Torr, PHS; Sener, O; Dokania, PK
Format: Conference item
Language: English
Published: NeurIPS 2025
Institution: University of Oxford
Collection: OXFORD
Identifier: oxford-uuid:03aea453-e982-4c1e-bcf7-fc7486a6fc48