Continual Learning

821 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks presented sequentially, without forgetting the knowledge obtained from preceding tasks, where data from the old tasks is no longer available when training on new ones.
Unless stated otherwise, the benchmarks listed here follow the task-incremental (Task-CL) setting, in which the task identity is provided during evaluation.
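
As an illustration of this protocol, the sketch below trains one model on tasks that arrive sequentially and re-evaluates every task seen so far after each one; accuracy on earlier tasks typically degrades because their data is never revisited. It is a minimal toy example (random data, a single linear model, made-up hyperparameters), not a reference implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Three toy tasks; in a real benchmark these would be distinct datasets.
tasks = [(torch.randn(256, 20), torch.randint(0, 2, (256,))) for _ in range(3)]
model = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for task_id, (x, y) in enumerate(tasks):
    # Only the current task's data is available for training.
    for _ in range(100):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

    # Evaluate on all tasks seen so far. In the Task-CL setting the task
    # identity would also be given here, e.g. to pick a task-specific head.
    for seen_id in range(task_id + 1):
        xs, ys = tasks[seen_id]
        acc = (model(xs).argmax(dim=1) == ys).float().mean().item()
        print(f"after task {task_id}: accuracy on task {seen_id} = {acc:.2f}")
```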

Sources:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Most implemented papers

Gradient Episodic Memory for Continual Learning

facebookresearch/GradientEpisodicMemory NeurIPS 2017

One major obstacle towards AI is the poor ability of models to solve new problems quickly and without forgetting previously acquired knowledge.
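
The excerpt above only gives the motivation; GEM's mechanism is to store a few examples per task in an episodic memory and to project the current-task gradient so that the loss on that memory cannot increase. The sketch below is a hedged illustration of the simplified single-constraint projection popularized by the follow-up A-GEM, not the paper's full quadratic program; the function and variable names are hypothetical.

```python
import torch

def project_gradient(g: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """Project the flattened current-task gradient `g` so it no longer
    conflicts with `g_ref`, the gradient computed on the episodic memory
    (single-constraint case, in the spirit of A-GEM)."""
    dot = torch.dot(g, g_ref)
    if dot < 0:  # the update would increase the memory loss
        g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g
```

After the projection, the corrected gradient is written back into the parameters' .grad fields before the optimizer step.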

Generative replay with feedback connections as a general strategy for continual learning

GMvandeVen/continual-learning 27 Sep 2018

A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one.

Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches

ContinualAI/avalanche 8 Jul 2019

Ideally, continual learning should be triggered by the availability of short videos of single objects and performed on-line on on-board hardware with fine-grained updates.

Learning to Continually Learn

uvm-neurobotics-lab/ANML 21 Feb 2020

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.

Dataset Condensation with Gradient Matching

VICO-UoE/DatasetCondensation ICLR 2021

As the state-of-the-art machine learning methods in many fields rely on larger datasets, storing datasets and training models on them become significantly more expensive.
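
The summary above covers the motivation; the technique itself learns a small synthetic dataset whose gradients imitate those of the real data, so models trained on the condensed set behave similarly. Below is a rough sketch of one gradient-matching update using a simplified per-layer cosine distance; all argument names are assumptions for illustration, not the authors' API.

```python
import torch
import torch.nn.functional as F

def gradient_match_step(model, syn_x, syn_y, real_x, real_y, syn_opt):
    # Gradients of the classification loss on a real batch (treated as targets).
    real_loss = F.cross_entropy(model(real_x), real_y)
    g_real = [g.detach() for g in torch.autograd.grad(real_loss, model.parameters())]

    # Gradients of the same loss on the learnable synthetic batch.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    g_syn = torch.autograd.grad(syn_loss, model.parameters(), create_graph=True)

    # Layer-wise cosine distance between the two sets of gradients.
    match = sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
                for gs, gr in zip(g_syn, g_real))

    syn_opt.zero_grad()
    match.backward()  # backpropagates through g_syn into the synthetic batch; the model itself is not stepped here
    syn_opt.step()

# syn_x would be a leaf tensor with requires_grad=True and fixed labels syn_y,
# optimized e.g. with syn_opt = torch.optim.SGD([syn_x], lr=0.1).
```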

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

arunmallya/packnet CVPR 2018

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting.
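
The excerpt above names the goal; the mechanism, per layer, is: train on a task, keep only the largest-magnitude weights among those not yet claimed by earlier tasks, freeze them for that task, and zero out the rest so later tasks can reuse them. A hedged sketch of that pruning step (the helper and its arguments are hypothetical, not the paper's code):

```python
import torch

def packnet_prune(weight: torch.Tensor, free_mask: torch.Tensor, keep_ratio: float = 0.5):
    """Claim the top `keep_ratio` fraction of still-free weights for the current
    task (by magnitude) and zero the remaining free weights for future tasks.
    Returns the boolean mask of weights now owned by the current task."""
    free_vals = weight[free_mask].abs()
    k = int(keep_ratio * free_vals.numel())
    threshold = free_vals.topk(k).values.min() if k > 0 else float("inf")
    task_mask = free_mask & (weight.abs() >= threshold)
    with torch.no_grad():
        weight[free_mask & ~task_mask] = 0.0  # released for the next task
    return task_mask
```

At inference, only the masks of the queried task and of earlier tasks are applied, so weights added later cannot interfere with previously learned tasks.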

Gradient based sample selection for online continual learning

rahafaljundi/Gradient-based-Sample-Selection NeurIPS 2019

To prevent forgetting, a replay buffer is usually employed to store the previous data for the purpose of rehearsal.
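
For context, this is the standard rehearsal setup the paper builds on: a small buffer of past examples is mixed into each training batch. A minimal reservoir-sampling buffer is sketched below; the paper's contribution is to replace this random selection with a gradient-based criterion. (Class and method names are illustrative, not the authors' code.)

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal memory filled by reservoir sampling, so every
    example seen so far has an equal chance of being stored."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:  # replace with probability capacity / n_seen
                self.data[j] = example

    def sample(self, k: int):
        # Mini-batch of stored examples to interleave with the current data.
        return random.sample(self.data, min(k, len(self.data)))
```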

Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning

SebFar/radial_bnn 1 Jul 2019

The Radial BNN is motivated by avoiding a sampling problem in 'mean-field' variational inference (MFVI) caused by the so-called 'soap-bubble' pathology of multivariate Gaussians.
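
For context on the 'soap-bubble' remark: in high dimensions a factorized Gaussian concentrates almost all of its probability mass in a thin shell far from the mean, so MFVI samples rarely fall near the mode. The radial posterior keeps the direction of the Gaussian noise but draws its norm from |N(0, 1)|. A hedged sketch of the sampling step (the function and its arguments are placeholders):

```python
import torch

def sample_weights(mu: torch.Tensor, sigma: torch.Tensor, radial: bool = True) -> torch.Tensor:
    """Draw one weight sample, either from a mean-field Gaussian posterior or
    from a radial posterior that avoids the high-dimensional 'soap bubble'."""
    eps = torch.randn_like(mu)
    if radial:
        direction = eps / eps.norm()                      # uniform direction on the sphere
        radius = torch.randn(1, device=mu.device).abs()   # |N(0, 1)| radius
        eps = direction * radius
    return mu + sigma * eps
```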

Training Binary Neural Networks using the Bayesian Learning Rule

team-approx-bayes/BayesBiNN ICML 2020

Our work provides a principled approach for training binary neural networks which justifies and extends existing approaches.

Understanding the Role of Training Regimes in Continual Learning

imirzadeh/stable-continual-learning NeurIPS 2020

However, there has been limited prior work extensively analyzing the impact that different training regimes -- learning rate, batch size, regularization method -- can have on forgetting.