
Continual Learning

101 papers with code · Methodology

Continual Learning is the problem of learning a model for a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, where data from the old tasks is no longer available while training on new ones.

Source: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation

Greatest papers with code

Continual Unsupervised Representation Learning

NeurIPS 2019 deepmind/deepmind-research

Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.

CONTINUAL LEARNING · OMNIGLOT · UNSUPERVISED REPRESENTATION LEARNING

Three scenarios for continual learning

15 Apr 2019 GMvandeVen/continual-learning

Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning.

CONTINUAL LEARNING · INCREMENTAL LEARNING

Generative replay with feedback connections as a general strategy for continual learning

27 Sep 2018 GMvandeVen/continual-learning

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.

CONTINUAL LEARNING
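
The replay idea can be illustrated with a short, hypothetical sketch (tiny random models and data standing in for a real task sequence, not the paper's architecture): a generator trained on earlier tasks produces replayed inputs, the previous solver labels them with soft targets, and the current solver is trained on real and replayed data together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of generative replay; all sizes and modules are illustrative.
torch.manual_seed(0)
solver = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
old_solver = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 20))
old_solver.eval()
generator.eval()

opt = torch.optim.Adam(solver.parameters(), lr=1e-3)
for step in range(100):
    # Real data for the current task (random stand-in).
    x_new = torch.randn(32, 20)
    y_new = torch.randint(0, 10, (32,))

    # Replayed data for earlier tasks: sample from the generator and label it
    # with the previous solver's predictions (soft targets).
    with torch.no_grad():
        x_replay = generator(torch.randn(32, 8))
        y_replay = F.softmax(old_solver(x_replay), dim=1)

    opt.zero_grad()
    loss_new = F.cross_entropy(solver(x_new), y_new)
    loss_replay = F.kl_div(F.log_softmax(solver(x_replay), dim=1),
                           y_replay, reduction="batchmean")
    (loss_new + loss_replay).backward()
    opt.step()
```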

Gradient Episodic Memory for Continual Learning

NeurIPS 2017 facebookresearch/GradientEpisodicMemory

One major obstacle towards AI is the poor ability of models to solve new problems more quickly, and without forgetting previously acquired knowledge.

CONTINUAL LEARNING
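
A minimal sketch of the underlying idea, simplified to a single memory constraint (closer to the averaged variant A-GEM than to the paper's full quadratic program over all past tasks): if the proposed gradient would increase the loss on the episodic memory, project it onto the nearest gradient that does not.

```python
import numpy as np

def project_gradient(g, g_mem):
    """Single-constraint gradient projection in the spirit of GEM / A-GEM.
    If the current-task gradient g has a negative inner product with the
    gradient on the episodic memory g_mem (i.e. the update would increase
    the memory loss), project g onto the closest vector that does not."""
    dot = g @ g_mem
    if dot < 0:
        g = g - (dot / (g_mem @ g_mem)) * g_mem
    return g

# Illustrative usage with random vectors standing in for the gradient on the
# current batch (g) and on the stored examples of past tasks (g_mem).
rng = np.random.default_rng(0)
g, g_mem = rng.normal(size=100), rng.normal(size=100)
g_proj = project_gradient(g, g_mem)
assert g_proj @ g_mem >= -1e-9  # memory loss no longer increases to first order
```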

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion

CVPR 2020 NVlabs/DeepInversion

We introduce DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network.

CONTINUAL LEARNING · NETWORK PRUNING · TRANSFER LEARNING
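
Roughly, DeepInversion optimizes random noise toward a target class while matching the batch-normalization statistics stored in the trained network. The sketch below uses an untrained stand-in model and illustrative hyperparameters, not the authors' implementation; in practice the method is applied to a fully trained classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small CNN standing in for a pretrained classifier (assumption for brevity).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()

# Hook every BatchNorm layer to penalize the gap between the statistics of
# the synthesized batch and the layer's stored running statistics.
bn_losses = []
def bn_hook(module, inputs, output):
    x = inputs[0]
    mean = x.mean(dim=[0, 2, 3])
    var = x.var(dim=[0, 2, 3], unbiased=False)
    bn_losses.append(F.mse_loss(mean, module.running_mean)
                     + F.mse_loss(var, module.running_var))

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

# Optimize random noise toward chosen classes while matching BN statistics.
targets = torch.randint(0, 10, (8,))
x = torch.randn(8, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for step in range(200):
    bn_losses.clear()
    opt.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, targets) + 0.1 * sum(bn_losses)
    loss.backward()
    opt.step()
```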

Practical Deep Learning with Bayesian Principles

NeurIPS 2019 team-approx-bayes/dl-with-bayes

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

CONTINUAL LEARNING · DATA AUGMENTATION

Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

30 Oct 2018 GT-RIPL/Continual-Learning-Benchmark

Continual learning has received a great deal of attention recently with several approaches being proposed.

CONTINUAL LEARNING · L2 REGULARIZATION
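
One of the strong baselines the tags refer to is plain L2 regularization toward the parameters learned on earlier tasks. A minimal sketch, with illustrative names rather than the repository's API:

```python
import torch
import torch.nn as nn

# Tiny model standing in for the network; snapshot its weights after the
# previous task finishes training.
model = nn.Linear(10, 2)
old_params = [p.detach().clone() for p in model.parameters()]

def l2_to_old_params(model, old_params, lam=1.0):
    # Penalty pulling current weights back toward the weights of earlier tasks.
    return lam * sum(((p - p_old) ** 2).sum()
                     for p, p_old in zip(model.parameters(), old_params))

# While training the next task, add the penalty to the task loss.
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y) + l2_to_old_params(model, old_params, lam=0.1)
loss.backward()
```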

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

CVPR 2018 arunmallya/packnet

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting.

CONTINUAL LEARNING · NETWORK PRUNING
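
The core mechanism can be sketched as follows (illustrative helper names, not the authors' code): after training on a task, keep only the largest-magnitude fraction of each weight tensor for that task, zero out the rest to free capacity for later tasks, and zero the gradients of the kept weights while training subsequent tasks.

```python
import torch
import torch.nn as nn

def prune_for_task(model, keep_fraction=0.5):
    """Keep the top `keep_fraction` of weights by magnitude for the current
    task; zero the rest so they can be reused by future tasks."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:          # skip biases / norm parameters
            continue
        k = int(keep_fraction * p.numel())
        threshold = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        mask = (p.abs() >= threshold).float()
        p.data.mul_(mask)        # zero the pruned weights, freeing capacity
        masks[name] = mask
    return masks

def freeze_kept_weights(model, masks):
    # While training the next task, zero gradients on weights reserved for
    # previous tasks so they cannot be overwritten; call after loss.backward().
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(1.0 - masks[name])
```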