Rapid adaptation in online continual learning: are we evaluating it right?
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric is unreliable, as even vacuous blind classifiers, which do not use input images for prediction, can achieve unrealistically high online accuracy by exploiting spurious label correlations in the data stream. Our study reveals that existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information, suggesting that they unintentionally learn spurious label correlations. To address this issue, we propose a novel metric for measuring adaptation based on the accuracy on the near-future samples, where spurious correlations are removed. We benchmark existing OCL approaches using our proposed metric on large-scale datasets under various computational budgets and find that better generalization can be achieved by retaining and reusing past seen information. We believe that our proposed metric can aid in the development of truly adaptive OCL methods. We provide code to reproduce our results at https://github.com/drimpossible/EvalOCL.
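The abstract's central claim, that a blind classifier can score highly on online accuracy by exploiting label correlations in the stream, can be illustrated with a toy sketch. This is not the authors' exact evaluation protocol (see their repository for that); the stream construction, the `blind_predict` heuristic, and the gap parameter `k` in `near_future_accuracy` are illustrative assumptions:

```python
# Toy illustration: a "blind" classifier that never looks at inputs
# can ace online accuracy on a temporally correlated label stream,
# while accuracy on samples further in the future exposes it.

# Synthetic label stream with strong temporal correlation:
# 10 classes arriving in contiguous runs of 100, as in class-incremental streams.
stream = [c for c in range(10) for _ in range(100)]

def blind_predict(history):
    """Predict the most recently seen label; ignore the input entirely."""
    return history[-1] if history else 0

def online_accuracy(stream):
    """Score each prediction against the immediate next sample."""
    correct, history = 0, []
    for y in stream:
        correct += blind_predict(history) == y
        history.append(y)
    return correct / len(stream)

def near_future_accuracy(stream, k=150):
    """Sketch of the paper's idea: evaluate on a sample k steps ahead,
    where the within-run label correlation has decayed."""
    correct, total = 0, 0
    for t in range(len(stream) - k):
        correct += blind_predict(stream[: t + 1]) == stream[t + k]
        total += 1
    return correct / total

print(f"online accuracy:      {online_accuracy(stream):.2f}")       # near 1.0
print(f"near-future accuracy: {near_future_accuracy(stream):.2f}")  # near 0.0
```

The blind predictor is wrong only at the nine run boundaries, so online accuracy is 0.99, yet it learns nothing useful: once evaluation skips past the current run (k larger than the run length), its accuracy collapses to zero, which is the failure mode the proposed metric is designed to surface.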
Main Authors: | Hammoud, HAAK; Prabhu, A; Lim, S-N; Torr, PHS; Bibi, A; Ghanem, B |
---|---|
Format: | Conference item |
Language: | English |
Published: | IEEE, 2024 |
_version_ | 1826312535329996800 |
---|---|
author | Hammoud, HAAK Prabhu, A Lim, S-N Torr, PHS Bibi, A Ghanem, B |
author_facet | Hammoud, HAAK Prabhu, A Lim, S-N Torr, PHS Bibi, A Ghanem, B |
author_sort | Hammoud, HAAK |
collection | OXFORD |
description | We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric is unreliable, as even vacuous blind classifiers, which do not use input images for prediction, can achieve unrealistically high online accuracy by exploiting spurious label correlations in the data stream. Our study reveals that existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information, suggesting that they unintentionally learn spurious label correlations. To address this issue, we propose a novel metric for measuring adaptation based on the accuracy on the near-future samples, where spurious correlations are removed. We benchmark existing OCL approaches using our proposed metric on large-scale datasets under various computational budgets and find that better generalization can be achieved by retaining and reusing past seen information. We believe that our proposed metric can aid in the development of truly adaptive OCL methods. We provide code to reproduce our results at https://github.com/drimpossible/EvalOCL. |
first_indexed | 2024-04-09T03:55:53Z |
format | Conference item |
id | oxford-uuid:3abd0d05-e219-48a4-a9c4-70f40f1ec3d3 |
institution | University of Oxford |
language | English |
last_indexed | 2024-04-09T03:55:53Z |
publishDate | 2024 |
publisher | IEEE |
record_format | dspace |
spelling | oxford-uuid:3abd0d05-e219-48a4-a9c4-70f40f1ec3d3 2024-03-15T14:32:07Z Rapid adaptation in online continual learning: are we evaluating it right? Conference item http://purl.org/coar/resource_type/c_5794 uuid:3abd0d05-e219-48a4-a9c4-70f40f1ec3d3 English Symplectic Elements IEEE 2024 Hammoud, HAAK; Prabhu, A; Lim, S-N; Torr, PHS; Bibi, A; Ghanem, B We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples. However, we show that this metric is unreliable, as even vacuous blind classifiers, which do not use input images for prediction, can achieve unrealistically high online accuracy by exploiting spurious label correlations in the data stream. Our study reveals that existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information, suggesting that they unintentionally learn spurious label correlations. To address this issue, we propose a novel metric for measuring adaptation based on the accuracy on the near-future samples, where spurious correlations are removed. We benchmark existing OCL approaches using our proposed metric on large-scale datasets under various computational budgets and find that better generalization can be achieved by retaining and reusing past seen information. We believe that our proposed metric can aid in the development of truly adaptive OCL methods. We provide code to reproduce our results at https://github.com/drimpossible/EvalOCL. |
spellingShingle | Hammoud, HAAK Prabhu, A Lim, S-N Torr, PHS Bibi, A Ghanem, B Rapid adaptation in online continual learning: are we evaluating it right? |
title | Rapid adaptation in online continual learning: are we evaluating it right? |
title_full | Rapid adaptation in online continual learning: are we evaluating it right? |
title_fullStr | Rapid adaptation in online continual learning: are we evaluating it right? |
title_full_unstemmed | Rapid adaptation in online continual learning: are we evaluating it right? |
title_short | Rapid adaptation in online continual learning: are we evaluating it right? |
title_sort | rapid adaptation in online continual learning are we evaluating it right |
work_keys_str_mv | AT hammoudhaak rapidadaptationinonlinecontinuallearningareweevaluatingitright AT prabhua rapidadaptationinonlinecontinuallearningareweevaluatingitright AT limsn rapidadaptationinonlinecontinuallearningareweevaluatingitright AT torrphs rapidadaptationinonlinecontinuallearningareweevaluatingitright AT bibia rapidadaptationinonlinecontinuallearningareweevaluatingitright AT ghanemb rapidadaptationinonlinecontinuallearningareweevaluatingitright |