More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning

Overcoming catastrophic forgetting in neural networks is a long-standing and core research objective for incremental learning. Notable studies have shown that regularization strategies enable the network to retain previously acquired knowledge without severe forgetting. Since those regularization strategies are mostly associated with classifier outputs, we propose a MUlti-Classifier (MUC) incremental learning paradigm that integrates an ensemble of auxiliary classifiers to estimate more effective regularization constraints. Additionally, we extend two common methods, focusing on parameter and activation regularization, from the conventional single-classifier paradigm to MUC. The classifier ensemble helps regularize network parameters or activations when the model moves on to learn the next task. Under the task-agnostic evaluation setting, our experimental results on the CIFAR-100 and Tiny ImageNet incremental benchmarks show that our method outperforms other baselines. Specifically, MUC obtains a 3%-5% accuracy boost and a 4%-5% reduction in forgetting ratio compared with MAS and LwF. Our code is available at https://github.com/Liuy8/MUC.
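The abstract describes the method only at a high level, so the short PyTorch sketch below illustrates one plausible reading of it: a shared feature extractor feeding a main classifier plus auxiliary classifier heads, with an LwF-style distillation term averaged over the whole ensemble when training on a new task. All names and sizes here (`MultiHeadNet`, `multi_head_distillation`, the layer dimensions) are illustrative assumptions, not the authors' released implementation at the GitHub link above.

```python
# Minimal sketch of a multi-classifier regularization setup, assuming a shared
# backbone with several classifier heads and an LwF-style distillation loss
# averaged over the heads. Hypothetical names, not the authors' actual API.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadNet(nn.Module):
    """Shared feature extractor with one main classifier and auxiliary heads."""

    def __init__(self, feature_dim=512, num_classes=10, num_aux=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3 * 32 * 32, feature_dim), nn.ReLU())
        # One main head plus `num_aux` auxiliary heads over the same features.
        self.heads = nn.ModuleList(
            [nn.Linear(feature_dim, num_classes) for _ in range(1 + num_aux)]
        )

    def forward(self, x):
        feats = self.backbone(x.flatten(1))
        return [head(feats) for head in self.heads]  # list of logits, one per head


def multi_head_distillation(new_outputs, old_outputs, T=2.0):
    """Average an LwF-style distillation loss over all classifier heads."""
    losses = []
    for new_logits, old_logits in zip(new_outputs, old_outputs):
        p_old = F.softmax(old_logits / T, dim=1)
        log_p_new = F.log_softmax(new_logits / T, dim=1)
        losses.append(F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T)
    return torch.stack(losses).mean()


# Usage: freeze a copy of the model from the previous task and add the
# ensemble distillation term to the new-task classification loss.
model = MultiHeadNet()
old_model = copy.deepcopy(model).eval()
for p in old_model.parameters():
    p.requires_grad_(False)

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
new_out = model(x)
with torch.no_grad():
    old_out = old_model(x)

loss = F.cross_entropy(new_out[0], y) + multi_head_distillation(new_out, old_out)
loss.backward()
```

Averaging the distillation term over several heads, rather than using a single classifier output, is the design choice the abstract motivates; a parameter-regularization variant in the spirit of MAS could be sketched analogously by accumulating importance weights per head, but is omitted here for brevity.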

Datasets

CIFAR-100, Tiny ImageNet
