RanDumb: a simple approach that questions the efficacy of continual representation learning
<p>We propose RanDumb to examine the efficacy of continual representation learning. RanDumb embeds raw pixels using a fixed random transform which approximates an RBF kernel, initialized before seeing any data, and learns a simple linear classifier on top. We present a surprising and consistent finding: RanDumb significantly outperforms the continually learned representations using deep networks across numerous continual learning benchmarks, demonstrating the poor performance of representation learning in these scenarios. RanDumb stores no exemplars and performs a single pass over the data, processing one sample at a time. It complements GDumb [39], operating in a low-exemplar regime where GDumb performs especially poorly. We reach the same consistent conclusions when RanDumb is extended to scenarios with pretrained models, replacing the random transform with a pretrained feature extractor. Our investigation is both surprising and alarming, as it questions our understanding of how to effectively design and train models that require efficient continual representation learning, and necessitates a principled reinvestigation of the widely explored problem formulation itself. Our code is available here.</p>
Main Authors: | Prabhu, A; Sinha, S; Kumaraguru, P; Torr, PHS; Sener, O; Dokania, PK |
---|---|
Format: | Conference item |
Language: | English |
Published: | 2024 |
author | Prabhu, A Sinha, S Kumaraguru, P Torr, PHS Sener, O Dokania, PK |
collection | OXFORD |
description | <p>We propose RanDumb to examine the efficacy of continual representation learning. RanDumb embeds raw pixels using a fixed random transform which approximates an RBF kernel, initialized before seeing any data, and learns a simple linear classifier on top. We present a surprising and consistent finding: RanDumb significantly outperforms the continually learned representations using deep networks across numerous continual learning benchmarks, demonstrating the poor performance of representation learning in these scenarios. RanDumb stores no exemplars and performs a single pass over the data, processing one sample at a time. It complements GDumb [39], operating in a low-exemplar regime where GDumb performs especially poorly. We reach the same consistent conclusions when RanDumb is extended to scenarios with pretrained models, replacing the random transform with a pretrained feature extractor. Our investigation is both surprising and alarming, as it questions our understanding of how to effectively design and train models that require efficient continual representation learning, and necessitates a principled reinvestigation of the widely explored problem formulation itself. Our code is available here.</p> |
id | oxford-uuid:069c92b3-a5d3-4cf4-bd21-42052ac54652 |
institution | University of Oxford |
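The abstract describes a fixed random transform that approximates an RBF kernel, drawn before any data is seen, with a linear classifier learned in a single pass, one sample at a time. This record gives no implementation details, so the following is only a minimal sketch under stated assumptions: random Fourier features (a standard way to approximate an RBF kernel) for the fixed embedding, and an online ridge-regression classifier maintained via sufficient statistics. The dimensions `d` and `D`, the bandwidth `sigma`, and the choice of estimator are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features approximate an RBF kernel:
#   k(x, y) ~ phi(x) . phi(y),  phi(x) = sqrt(2/D) * cos(W x + b),
# with W ~ N(0, 1/sigma^2) and b ~ U[0, 2*pi). The projection is drawn
# once, before any data arrives. d, D and sigma are illustrative choices.
d, D, sigma = 64, 256, 8.0           # input dim, feature dim, kernel bandwidth
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def embed(x):
    """Fixed, data-independent embedding of a flattened input x (shape (d,))."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Streaming linear classifier via online ridge regression on one-hot targets.
# Only the sufficient statistics A and B are kept, updated one sample at a
# time in a single pass -- no exemplars are stored.
num_classes, lam = 10, 1.0
A = lam * np.eye(D)                  # regularized feature second-moment matrix
B = np.zeros((D, num_classes))       # feature / one-hot-label cross terms

def observe(x, y):
    """Consume one (input, label) pair from the stream."""
    phi = embed(x)
    A[...] += np.outer(phi, phi)     # in-place update of the global statistics
    B[:, y] += phi

def predict(x):
    """Classify with the ridge solution for the statistics seen so far."""
    weights = np.linalg.solve(A, B)  # shape (D, num_classes)
    return int(np.argmax(embed(x) @ weights))
```

Because the embedding is fixed, no representation is learned at all: only the closed-form linear read-out changes as samples stream past, which is what lets this kind of baseline be compared against continually trained deep networks.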