Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

1 Jan 2021  ·  Tiffany Tuor, Shiqiang Wang, Kin Leung

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when a new task is sufficiently different from the previous ones. To overcome this issue, various continual learning techniques have been developed in recent years, which, however, often suffer from substantially increased model complexity and training time. In this paper, we illustrate that Occam's razor, which suggests that "entities should not be multiplied without necessity," carries over to the development of efficient continual learning techniques. By proposing a relatively simple method based on Bayesian neural networks and model selection, we are able to significantly outperform various state-of-the-art techniques in terms of accuracy, model size, and running time. Moreover, unlike many existing works, our technique supports continual learning without knowledge of task identities in the training and inference phases, which is also known as task-free continual learning.
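
The abstract names two ingredients, Bayesian neural networks and model selection, without further detail. As a rough illustration only, the sketch below shows one common way these pieces can fit together: a small Bayesian network trained by variational inference (a factorized Gaussian posterior over weights with a KL penalty to a standard-normal prior), plus an evidence-style criterion for routing an incoming batch to one of several candidate models without being told the task identity. All names (`BayesianLinear`, `select_model`, the two-layer architecture) and the specific selection rule are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over weights and biases."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -6.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_logvar = nn.Parameter(torch.full((out_features,), -6.0))

    def forward(self, x):
        # Reparameterization trick: sample weights from the current posterior.
        w = self.w_mu + torch.randn_like(self.w_mu) * (0.5 * self.w_logvar).exp()
        b = self.b_mu + torch.randn_like(self.b_mu) * (0.5 * self.b_logvar).exp()
        return F.linear(x, w, b)

    def kl(self):
        # KL divergence from the posterior to a standard-normal prior N(0, 1).
        def kl_term(mu, logvar):
            return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum()
        return kl_term(self.w_mu, self.w_logvar) + kl_term(self.b_mu, self.b_logvar)


class BayesianMLP(nn.Module):
    """Two-layer Bayesian classifier (architecture chosen arbitrarily for the sketch)."""

    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.l1 = BayesianLinear(in_dim, hidden)
        self.l2 = BayesianLinear(hidden, out_dim)

    def forward(self, x):
        return self.l2(F.relu(self.l1(x)))

    def kl(self):
        return self.l1.kl() + self.l2.kl()


def negative_elbo(model, x, y, dataset_size):
    """Variational free energy: expected negative log-likelihood plus scaled KL."""
    nll = F.cross_entropy(model(x), y, reduction="mean")
    return nll + model.kl() / dataset_size


def select_model(models, x, y, dataset_size):
    """Task-free model selection on a training batch: route the batch to the
    candidate whose variational free energy is lowest (evidence is highest).
    Note this uses labels, so it only applies where labels are available."""
    with torch.no_grad():
        scores = [negative_elbo(m, x, y, dataset_size) for m in models]
    return min(range(len(models)), key=lambda i: scores[i])
```

In this sketch, each arriving batch is scored against every candidate model and only the selected one is updated with its negative ELBO as the loss; at inference time, where labels are unavailable, a label-free proxy such as predictive entropy would be needed instead. Whether and how the paper performs these steps is not specified in the abstract.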
