Continual learning is the problem of learning a model over a large number of tasks presented sequentially, without forgetting the knowledge obtained from preceding tasks, in a setting where data from old tasks is no longer available while training on new ones.
Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.
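This setting is easy to make concrete. Below is a minimal sketch of naive sequential training on a stream of tasks: each task's data is discarded once training moves on, which is exactly the regime in which plain gradient descent forgets. The model, the synthetic task generator, and all hyperparameters are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn as nn

# Illustrative assumption: synthetic 2-class tasks, each with its own rule.
def make_task(seed, n=256, dim=8):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    y = (x @ w > 0).long()          # task-specific labeling rule
    return x, y

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tasks = [make_task(seed) for seed in range(5)]
for t, (x, y) in enumerate(tasks):
    # Train on the current task only; earlier tasks' data is gone.
    for _ in range(100):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Evaluate on all tasks seen so far: accuracy on earlier tasks
    # typically degrades -- catastrophic forgetting.
    with torch.no_grad():
        accs = [(model(xo).argmax(1) == yo).float().mean().item()
                for xo, yo in tasks[:t + 1]]
    print(f"after task {t}:", [round(a, 2) for a in accs])
```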
We propose a transition prior to account for the temporal dependencies in streaming data and update the mixture online via sequential variational inference.
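To give a rough feel for online mixture updating with temporal smoothing, the sketch below uses a simple online EM-style update for a 1-D Gaussian mixture, with a "sticky" transition matrix that biases the current assignment toward the previous step's responsibilities. This is a deliberate simplification, not the paper's sequential variational inference; all names and constants are hypothetical.

```python
import numpy as np

K = 3                                    # number of mixture components (assumed)
mu = np.array([-2.0, 0.0, 2.0])          # component means
var = np.ones(K)                         # component variances (kept fixed here)
pi = np.full(K, 1.0 / K)                 # mixture weights
STICKY = 0.8                             # transition prior: tendency to stay put
A = STICKY * np.eye(K) + (1.0 - STICKY) / K   # sticky transition matrix
rho = 0.05                               # online step size
prev_resp = np.full(K, 1.0 / K)

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for x in np.random.default_rng(0).normal(2.0, 0.5, size=500):
    # The transition prior smooths assignments over time: each component's
    # predictive weight mixes the global weights with the previous
    # responsibilities pushed through A.
    pred = pi * (A @ prev_resp)
    resp = pred * gauss(x, mu, var)
    resp /= resp.sum()
    # Online (EM-style) updates from the single new observation.
    pi = (1.0 - rho) * pi + rho * resp
    mu += rho * resp * (x - mu)
    prev_resp = resp

print("weights:", np.round(pi, 2))
print("means:  ", np.round(mu, 2))
```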
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly and catastrophically forget previously learned tasks when trained on a new one.
Continual learning has received a great deal of attention recently, with several approaches being proposed.
Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.
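The Bayesian view of continual learning rests on a recursive posterior update, in which the posterior after earlier tasks becomes the prior for the next one; since the exact posterior is intractable for neural networks, each step is replaced by a variational approximation. A standard form of this recursion (generic, not specific to the paper above) is:

```latex
% Recursive Bayesian update over a task sequence D_1, ..., D_T:
% the posterior after tasks 1..t-1 serves as the prior for task t.
p(\theta \mid D_{1:t}) \propto p(D_t \mid \theta)\, p(\theta \mid D_{1:t-1}),
\qquad t = 1, \dots, T
% In variational continual learning each step finds an approximation q_t
% by minimizing a KL divergence, with Z_t the normalizing constant:
q_t(\theta) = \arg\min_{q \in \mathcal{Q}}
\mathrm{KL}\!\left( q(\theta) \,\middle\|\,
\tfrac{1}{Z_t}\, p(D_t \mid \theta)\, q_{t-1}(\theta) \right)
```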
This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting.
This work presents a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks.
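One common way to realize this kind of adaptation is to keep the backbone weights frozen and learn a cheap per-task binary mask over them, so performance on earlier tasks is preserved by construction. The sketch below shows that general mask-based pattern in PyTorch, with a straight-through estimator to train the masks; it is an illustration under assumed shapes and names, not the exact method of either paper above.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """A frozen linear layer with a learnable real-valued mask per task.

    At forward time the mask is binarized by thresholding, so each task
    selects a subnetwork of the fixed backbone without modifying it.
    """
    def __init__(self, in_f, out_f, n_tasks):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.1,
                                   requires_grad=False)   # frozen backbone
        self.scores = nn.Parameter(torch.zeros(n_tasks, out_f, in_f))

    def forward(self, x, task):
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # gradients flow to the underlying real-valued scores.
        s = self.scores[task]
        hard = (s >= 0).float()
        mask = hard + s - s.detach()
        return nn.functional.linear(x, self.weight * mask)

layer = MaskedLinear(8, 4, n_tasks=3)
x = torch.randn(2, 8)
out = layer(x, task=1)     # only task 1's mask is active
out.sum().backward()       # updates scores for task 1; backbone stays frozen
print(layer.scores.grad[0].abs().sum(),   # zero: task 0 untouched
      layer.scores.grad[1].abs().sum())   # nonzero: task 1 trained
```

Because only the mask scores receive gradients, training a new task cannot perturb the shared weights, and switching tasks at inference time is just a matter of selecting the corresponding mask.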