Continual Learning
799 papers with code • 28 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, in a setting where data from old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks here are Task-CL (task-incremental), where the task ID is provided at evaluation time.
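To make the setting concrete, here is a minimal PyTorch sketch of task-incremental training: tasks arrive one at a time, data from earlier tasks is unavailable, and the given task ID selects an output head. The multi-head architecture and all names are illustrative assumptions, not a reference implementation.

```python
# A minimal sketch of the Task-CL setting, assuming a multi-head model whose
# task id selects the output head; all names here are illustrative.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared backbone with one output head per task."""
    def __init__(self, in_dim, hidden_dim, classes_per_task, num_tasks):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, classes_per_task) for _ in range(num_tasks)
        )

    def forward(self, x, task_id):
        # The task id is given (Task-CL), so it simply picks the matching head.
        return self.heads[task_id](self.backbone(x))

def train_sequentially(model, task_loaders, epochs=1, lr=1e-3):
    """Tasks arrive one at a time; earlier tasks' data is no longer available."""
    loss_fn = nn.CrossEntropyLoss()
    for task_id, loader in enumerate(task_loaders):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:  # only the current task's data is visible
                opt.zero_grad()
                loss_fn(model(x, task_id), y).backward()
                opt.step()
    return model
```

Trained naively like this, the shared backbone drifts toward the latest task, which is exactly the catastrophic forgetting the methods below try to mitigate.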
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
Libraries
Use these libraries to find Continual Learning models and implementations.
Datasets
Subtasks
Latest papers
Online Continual Learning For Interactive Instruction Following Agents
To take a step towards a more realistic embodied agent learning scenario, we propose two continual learning setups for embodied agents: learning new behaviors (Behavior Incremental Learning, Behavior-IL) and new environments (Environment Incremental Learning, Environment-IL). For these tasks, previous 'data prior'-based continual learning methods maintain logits for the past tasks.
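As a rough illustration of what "maintaining logits for past tasks" can look like, the sketch below stores teacher logits for a few replayed inputs and distills toward them alongside the new-task loss. The buffer contents, weighting alpha, and temperature T are assumptions, not the paper's method.

```python
# A hedged sketch of a logit-rehearsal ("data prior") baseline: distill toward
# stored logits of past-task inputs while learning the new task. The buffer,
# alpha, and temperature T are assumptions, not the paper's method.
import torch
import torch.nn.functional as F

def distill_step(model, opt, new_x, new_y, buf_x, buf_logits, alpha=0.5, T=2.0):
    opt.zero_grad()
    new_loss = F.cross_entropy(model(new_x), new_y)   # current-task objective
    student = F.log_softmax(model(buf_x) / T, dim=1)  # predictions on replayed inputs
    teacher = F.softmax(buf_logits / T, dim=1)        # logits saved when the old task was learned
    kd_loss = F.kl_div(student, teacher, reduction="batchmean") * T * T
    loss = new_loss + alpha * kd_loss
    loss.backward()
    opt.step()
    return loss.item()
```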
Continual All-in-One Adverse Weather Removal with Knowledge Replay on a Unified Network Structure
It considers the characteristics of the image restoration task with multiple degradations in continual learning, allowing the knowledge for different degradations to be shared and accumulated in the unified network structure.
Federated Learning of Socially Appropriate Agent Behaviours in Simulated Home Environments
In this paper, we present a novel FL benchmark that evaluates different strategies using multi-label regression objectives, where each client individually learns to predict the social appropriateness of different robot actions while also sharing its learning with others.
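A minimal FedAvg-style sketch of this setup: each client fits a multi-label regression objective locally, and the server averages client parameters. The helper names, MSE objective, and round counts are illustrative assumptions rather than the benchmark's actual protocol.

```python
# A hedged FedAvg sketch, assuming each client minimizes a multi-label
# regression (MSE) objective; names and hyperparameters are illustrative.
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=1e-2):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:  # y: one appropriateness score per robot action
            opt.zero_grad()
            mse(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(global_model, client_loaders, rounds=10):
    for _ in range(rounds):
        states = [local_update(global_model, dl) for dl in client_loaders]
        # "sharing their learning": average client parameters into the global model
        avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```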
Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning
We show here that the combination of a large language model and an image generation model can similarly provide useful premonitions as to how a continual learning challenge might develop over time.
On the Diminishing Returns of Width for Continual Learning
While deep neural networks have demonstrated groundbreaking performance in various settings, these models often suffer from catastrophic forgetting when trained on new tasks in sequence.
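Catastrophic forgetting is commonly quantified as the drop in accuracy on earlier tasks after training on later ones. A small sketch of that standard average-forgetting metric, assuming the accuracy matrix is computed externally:

```python
# Average forgetting: how far accuracy on each old task has fallen from its
# best earlier value by the end of the task sequence.
def average_forgetting(acc):
    """acc[t][i]: accuracy on task i measured after training task t (i <= t)."""
    T = len(acc)
    drops = []
    for i in range(T - 1):  # the last task has not yet had a chance to be forgotten
        best_earlier = max(acc[t][i] for t in range(i, T - 1))
        drops.append(best_earlier - acc[T - 1][i])
    return sum(drops) / len(drops)
```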
LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error
We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for tool use, only reach a correctness rate in the range of 30% to 60%, far from reliable use in practice.
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
Recently, owing to the high-quality representations produced by contrastive learning methods, rehearsal-based contrastive continual learning has been proposed to continually learn transferable representation embeddings while avoiding the catastrophic forgetting seen in traditional continual settings.
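A hedged sketch of the basic rehearsal-based contrastive recipe: a reservoir-sampled replay buffer is mixed into each batch before a supervised contrastive loss. The buffer policy, loss form, and hyperparameters are assumptions; the paper's importance sampling and prototype-instance relation distillation are not reproduced here.

```python
# A minimal rehearsal + supervised-contrastive sketch; buffer size, loss, and
# temperature are assumptions, not the paper's method.
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Keeps a uniform sample over the stream via reservoir sampling."""
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def sup_con_loss(z, y, temperature=0.1):
    """Supervised contrastive loss: pull same-label embeddings together."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))        # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)        # avoid 0 * -inf below
    pos = ((y.unsqueeze(0) == y.unsqueeze(1)) & ~eye).float()
    has_pos = pos.sum(1) > 0
    loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return loss[has_pos].mean()

def contrastive_step(encoder, buf, x, y, opt):
    xs, ys = x, y
    if buf.data:                                     # mix in replayed samples, if any
        bx, by = buf.sample(len(x))
        xs, ys = torch.cat([x, bx]), torch.cat([y, by])
    opt.zero_grad()
    loss = sup_con_loss(encoder(xs), ys)
    loss.backward()
    opt.step()
    buf.add(x, y)                                    # reservoir update with new data only
    return loss.item()
```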
GUIDE: Guidance-based Incremental Learning with Diffusion Models
We introduce GUIDE, a novel continual learning approach that directs diffusion models to rehearse samples at risk of being forgotten.
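To give a flavor of guidance-based rehearsal, the sketch below steers a reverse-diffusion (DDIM-style) sampler toward a class at risk of forgetting using the classifier's gradient, i.e. classic classifier guidance. The denoiser interface, noise schedule, and guidance scale are all assumptions and not GUIDE's exact algorithm.

```python
# A hedged classifier-guidance sketch: bias reverse diffusion toward a
# previous-task class. Denoiser signature and schedule are assumptions.
import torch

@torch.no_grad()
def guided_sample(denoiser, classifier, target_class, shape, steps=50, scale=2.0):
    x = torch.randn(shape)                       # start from pure noise
    betas = torch.linspace(1e-4, 0.02, steps)
    abar = torch.cumprod(1.0 - betas, dim=0)     # cumulative alpha schedule
    for t in reversed(range(steps)):
        with torch.enable_grad():                # classifier gradient w.r.t. x_t
            xg = x.detach().requires_grad_(True)
            logp = torch.log_softmax(classifier(xg), dim=1)[:, target_class].sum()
            grad = torch.autograd.grad(logp, xg)[0]
        # classifier guidance: bias the predicted noise toward the target class
        eps = denoiser(x, t) - scale * (1 - abar[t]).sqrt() * grad
        x0 = (x - (1 - abar[t]).sqrt() * eps) / abar[t].sqrt()
        if t == 0:
            return x0
        x = abar[t - 1].sqrt() * x0 + (1 - abar[t - 1]).sqrt() * eps
```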
Interactive Continual Learning: Fast and Slow Thinking
Drawing on Complementary Learning System theory, this paper presents a novel Interactive Continual Learning (ICL) framework, enabled by collaborative interactions among models of various sizes.
Recall-Oriented Continual Learning with Generative Adversarial Meta-Model
The stability-plasticity dilemma is a major challenge in continual learning, as it involves balancing the conflicting objectives of maintaining performance on previous tasks while learning new tasks.