Continual Learning

835 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, in a setting where data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks listed here are task-incremental (Task-CL), meaning the task ID is provided at evaluation time.
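As a minimal illustration of the task-incremental setting (a sketch only; the model, data, and loop below are hypothetical), each task is trained in sequence on its own data, old data is discarded afterwards, and the task ID selects the output head:

    import torch
    import torch.nn as nn

    # Hypothetical multi-head network for task-incremental learning:
    # a shared feature extractor plus one output head per task.
    class MultiHeadNet(nn.Module):
        def __init__(self, in_dim=784, hidden=256, classes_per_task=2, n_tasks=5):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.heads = nn.ModuleList(
                [nn.Linear(hidden, classes_per_task) for _ in range(n_tasks)]
            )

        def forward(self, x, task_id):
            # In Task-CL the task ID is known, so it picks the head.
            return self.heads[task_id](self.backbone(x))

    # Dummy per-task loaders; once a task is done, its data is never revisited.
    task_loaders = [
        torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(
                torch.randn(64, 784), torch.randint(0, 2, (64,))
            ),
            batch_size=16,
        )
        for _ in range(5)
    ]

    model = MultiHeadNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for task_id, loader in enumerate(task_loaders):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x, task_id), y).backward()
            opt.step()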

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Libraries

Use these libraries to find Continual Learning models and implementations
See all 8 libraries.

Most implemented papers

Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment

Haoran-S/ICASSP2021 16 Nov 2020

We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting knowledge learned from the previous episodes.

Avalanche: an End-to-End Library for Continual Learning

ContinualAI/avalanche 1 Apr 2021

Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
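As a quick taste of the workflow (a sketch following Avalanche's documented API; module paths have shifted between releases, so check the version you install):

    import torch
    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training.supervised import Naive

    # Split MNIST into 5 sequential experiences of 2 digits each.
    benchmark = SplitMNIST(n_experiences=5)

    model = SimpleMLP(num_classes=10)
    strategy = Naive(
        model,
        torch.optim.SGD(model.parameters(), lr=0.001),
        torch.nn.CrossEntropyLoss(),
        train_mb_size=32,
        train_epochs=1,
    )

    # Train on each experience in turn; evaluate on the full test stream.
    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)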

Learning to Prompt for Continual Learning

google-research/l2p CVPR 2022

The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.
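L2P instead keeps a pre-trained backbone frozen and learns a small pool of prompts, choosing which prompts to prepend for each input by matching a query feature against learnable keys. A heavily simplified sketch of that selection step (dimensions and names are illustrative, not the official implementation):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PromptPool(nn.Module):
        # Illustrative prompt pool: learnable prompts with matching keys.
        def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
            super().__init__()
            self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
            self.keys = nn.Parameter(torch.randn(pool_size, dim))
            self.top_k = top_k

        def forward(self, query):
            # query: (batch, dim), e.g. a frozen encoder's [CLS] feature.
            scores = F.cosine_similarity(
                query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1
            )                                        # (batch, pool_size)
            idx = scores.topk(self.top_k, dim=1).indices
            selected = self.prompts[idx]             # (batch, top_k, len, dim)
            return selected.flatten(1, 2)            # tokens to prepend

    pool = PromptPool()
    query = torch.randn(4, 768)      # stand-in for frozen features
    prompt_tokens = pool(query)      # shape (4, 15, 768)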

Training and Inference with Integers in Deep Neural Networks

boluoweifenda/WAGE ICLR 2018

Deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising research topics.
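The core operation behind such integer schemes is mapping continuous values onto a low-bit fixed-point grid; a minimal sketch of k-bit uniform quantization with a straight-through gradient (an illustration of the general idea, not the paper's exact WAGE scheme):

    import torch

    def quantize(x, bits=8):
        # Uniformly quantize values in [-1, 1] to a k-bit fixed-point grid.
        scale = 2 ** (bits - 1) - 1
        q = torch.round(torch.clamp(x, -1, 1) * scale) / scale
        # Straight-through estimator: rounding passes gradients unchanged.
        return x + (q - x).detach()

    w = torch.randn(3, 3) * 0.5
    print(quantize(w, bits=2))   # entries snap to {-1.0, 0.0, 1.0}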

Efficient parametrization of multi-domain deep neural networks

srebuffi/residual_adapters CVPR 2018

A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain.
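The paper's remedy is to attach small per-domain adapter modules to a shared backbone; a minimal sketch of a residual adapter wrapping a frozen convolution (names and sizes are illustrative):

    import torch
    import torch.nn as nn

    class ResidualAdapter(nn.Module):
        # A frozen shared conv plus a small trainable 1x1 conv per domain.
        def __init__(self, shared_conv, channels):
            super().__init__()
            self.shared = shared_conv
            for p in self.shared.parameters():
                p.requires_grad = False      # shared weights stay fixed
            self.adapter = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x):
            h = self.shared(x)
            return h + self.adapter(h)       # domain-specific residual

    shared = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    layer = ResidualAdapter(shared, channels=64)
    out = layer(torch.randn(2, 64, 32, 32))  # shape (2, 64, 32, 32)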

Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines

GT-RIPL/Continual-Learning-Benchmark 30 Oct 2018

Continual learning has received a great deal of attention recently with several approaches being proposed.

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

MrtnMndt/OCDVAE_ContinualLearning 28 May 2019

Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge.
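The approach combines generative replay with open set recognition: after each task a generative model is kept around to synthesize samples of old data, which are mixed into training on the next task. A bare-bones sketch of the replay loop only (the generator and loaders are hypothetical placeholders):

    import copy
    import torch

    def train_with_generative_replay(model, generator, task_loaders, opt, loss_fn):
        # Sketch: mix generated "old" samples into each new task's batches.
        old_model = None
        for t, loader in enumerate(task_loaders):
            for x, y in loader:
                if old_model is not None:
                    # Replay pseudo-data for past tasks, labeled by the
                    # frozen snapshot of the previous model.
                    x_old = generator.sample(len(x))
                    with torch.no_grad():
                        y_old = old_model(x_old).argmax(dim=1)
                    x = torch.cat([x, x_old])
                    y = torch.cat([y, y_old])
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
            # Snapshot after each task; the generator would be refit here too.
            old_model = copy.deepcopy(model).eval()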

Latent Replay for Real-Time Continual Learning

vlomonaco/ar1-pytorch 2 Dec 2019

Continual learning techniques, where complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only embedded devices, enabling remarkable levels of adaptiveness and autonomy.
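Latent replay keeps the cost down by storing activations from an intermediate layer rather than raw inputs, and replaying them into the upper layers only; a rough sketch (the layer split and buffer policy here are illustrative):

    import torch
    import torch.nn as nn

    # Hypothetical split: frozen lower layers, trainable upper layers.
    lower = nn.Sequential(nn.Linear(784, 256), nn.ReLU()).eval()
    upper = nn.Linear(256, 10)
    for p in lower.parameters():
        p.requires_grad = False

    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(
            torch.randn(64, 784), torch.randint(0, 10, (64,))
        ),
        batch_size=16,
    )

    opt = torch.optim.SGD(upper.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    buffer = []                        # stores (latent, label), not images

    for x, y in loader:                # current-task minibatches
        with torch.no_grad():
            z = lower(x)               # latents are small and cheap to keep
        z_train, y_train = z, y
        if buffer:
            z_old, y_old = buffer[0]   # simplistic replay of one old batch
            z_train = torch.cat([z, z_old])
            y_train = torch.cat([y, y_old])
        opt.zero_grad()
        loss_fn(upper(z_train), y_train).backward()
        opt.step()
        buffer.append((z, y))          # in practice: bounded, sampled buffer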

Dark Experience for General Continual Learning: a Strong, Simple Baseline

aimagelab/mammoth NeurIPS 2020

Continual Learning has inspired a plethora of approaches and evaluation settings; however, the majority of them overlook the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable.
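The paper's baseline, Dark Experience Replay (DER), keeps a reservoir buffer of past inputs together with the logits the network produced for them, and regularizes current outputs toward those stored logits; a condensed sketch of the loss (buffer handling simplified):

    import random
    import torch
    import torch.nn.functional as F

    buffer, seen, capacity = [], 0, 200    # reservoir of (input, logits)

    def reservoir_add(x, logits):
        # Reservoir sampling keeps a uniform sample of the whole stream.
        global seen
        for xi, li in zip(x, logits):
            if len(buffer) < capacity:
                buffer.append((xi, li))
            else:
                j = random.randint(0, seen)
                if j < capacity:
                    buffer[j] = (xi, li)
            seen += 1

    def der_loss(model, x, y, alpha=0.5):
        logits = model(x)
        loss = F.cross_entropy(logits, y)
        if buffer:
            xb, lb = zip(*random.sample(buffer, min(32, len(buffer))))
            xb, lb = torch.stack(xb), torch.stack(lb)
            # Match current outputs to the logits stored when xb was seen.
            loss = loss + alpha * F.mse_loss(model(xb), lb)
        reservoir_add(x.detach(), logits.detach())
        return loss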

Continual Learning in Recurrent Neural Networks

mariacer/cl_in_rnns ICLR 2021

Here, we provide the first comprehensive evaluation of established CL methods on a variety of sequential data benchmarks.