Continual Learning
835 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, even though data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks here are Task-Incremental (Task-CL), where the task identity is provided at evaluation time.
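The core difficulty described above, catastrophic forgetting, can be illustrated with a minimal experience-replay sketch. The toy tasks, model, and buffer sizes below are illustrative assumptions, not taken from any of the papers listed on this page: a linear regressor is trained on two conflicting tasks in sequence, and a small buffer of stored examples from earlier tasks is mixed into later training to mitigate forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    # Toy regression task: y = X @ w_true + small noise
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def sgd_step(w, X, y, lr=0.1):
    # One full-batch gradient step on mean-squared error
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def train_sequentially(tasks, replay=True, epochs=200):
    # Train on each task in turn; old task data is unavailable
    # except for what the replay buffer retained.
    w = np.zeros(2)
    buffer_X, buffer_y = [], []
    for X, y in tasks:
        for _ in range(epochs):
            if replay and buffer_X:
                # Mix current data with replayed samples from old tasks
                Xb = np.vstack([X] + buffer_X)
                yb = np.concatenate([y] + buffer_y)
            else:
                Xb, yb = X, y
            w = sgd_step(w, Xb, yb)
        # Store a subset of this task for later replay
        buffer_X.append(X[:100])
        buffer_y.append(y[:100])
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Two conflicting tasks: A predicts from feature 1, B from feature 2
task_a = make_task(np.array([1.0, 0.0]))
task_b = make_task(np.array([0.0, 1.0]))

w_naive = train_sequentially([task_a, task_b], replay=False)
w_replay = train_sequentially([task_a, task_b], replay=True)
```

Without replay, sequential training converges to task B's solution and the error on task A grows large (forgetting); with replay, the error on task A stays substantially lower while task B is still learned.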
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
Libraries
Use these libraries to find Continual Learning models and implementations
Datasets
Subtasks
Most implemented papers
Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment
We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting knowledge learned from the previous episodes.
Avalanche: an End-to-End Library for Continual Learning
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
Learning to Prompt for Continual Learning
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.
Training and Inference with Integers in Deep Neural Networks
Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic.
Efficient parametrization of multi-domain deep neural networks
A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain.
Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines
Continual learning has received a great deal of attention recently with several approaches being proposed.
Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge.
Latent Replay for Real-Time Continual Learning
Continual learning techniques, where complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only embedded devices, enabling remarkable levels of adaptiveness and autonomy.
Dark Experience for General Continual Learning: a Strong, Simple Baseline
Continual Learning has inspired a plethora of approaches and evaluation settings; however, most of them overlook the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable.
Continual Learning in Recurrent Neural Networks
Here, we provide the first comprehensive evaluation of established CL methods on a variety of sequential data benchmarks.