Summary: Class Incremental Learning (CIL) aims to learn a
classifier in a phase-by-phase manner, in which only data of
a subset of the classes are provided at each phase. Previous
works mainly focus on mitigating forgetting in phases after
the initial one. However, we find that improving CIL at its
initial phase is also a promising direction. Specifically, we
experimentally show that directly encouraging the CIL learner
at the initial phase to produce representations similar to those of a
model jointly trained on all classes can greatly boost the
CIL performance. Motivated by this, we study the difference between a naïvely-trained initial-phase model and the
oracle model. Specifically, since one major difference between these two models is the number of training classes,
we investigate how such difference affects the model representations. We find that, with fewer training classes, the
data representations of each class lie in a long and narrow
region; with more training classes, the representations of
each class scatter more uniformly. Inspired by this observation, we propose Class-wise Decorrelation (CwD), which effectively regularizes the representations of each class to scatter
more uniformly, thus mimicking the model jointly trained
with all classes (i.e., the oracle model). Our CwD is simple
to implement and easy to plug into existing methods. Extensive experiments on various benchmark datasets show
that CwD consistently and significantly improves the performance of existing state-of-the-art methods by around 1%
to 3%.
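To make the idea concrete, below is a minimal NumPy sketch of one way a class-wise decorrelation penalty could be computed; this is an illustrative assumption, not the authors' exact implementation. The function name `cwd_penalty` and all details (standardization, penalizing off-diagonal entries of each class's feature correlation matrix) are hypothetical choices consistent with the abstract's description of encouraging each class's representations to scatter more uniformly.

```python
import numpy as np

def cwd_penalty(features, labels):
    """Sketch of a class-wise decorrelation penalty (hypothetical, not the
    paper's exact loss): for each class, standardize its feature matrix,
    form the d x d correlation matrix, and penalize the mean squared
    off-diagonal entry so feature dimensions decorrelate and the class's
    representations spread more uniformly."""
    classes = np.unique(labels)
    penalty = 0.0
    for c in classes:
        Z = features[labels == c]                # (n_c, d) features of class c
        Z = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-8)
        corr = Z.T @ Z / len(Z)                  # d x d correlation matrix
        off_diag = corr - np.diag(np.diag(corr)) # zero out the diagonal
        penalty += (off_diag ** 2).mean()
    return penalty / len(classes)
```

In practice such a term would be added, with a small weight, to the usual classification loss during the initial phase; strongly correlated features (e.g. all dimensions driven by one shared factor) yield a large penalty, while nearly independent dimensions yield one close to zero.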