Continual Learning

819 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of training a model on a large number of tasks sequentially without forgetting the knowledge obtained from preceding tasks, where data from old tasks is no longer available while training on new ones.
Unless otherwise noted, the benchmarks here use the task-incremental (Task-CL) setting, where the task ID is provided at evaluation time.
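As a minimal toy sketch of the setting above (not taken from any listed paper): training one shared parameter sequentially on two conflicting tasks overwrites the first task's solution, while the Task-CL setting, where the provided task ID selects task-specific parameters, retains both.

```python
# Toy illustration of catastrophic forgetting vs. the Task-CL setting.
# All names here are hypothetical; each "task" is just fitting a scalar
# target with gradient descent on squared error.

def train(params, key, target, lr=0.1, steps=50):
    """Run gradient descent on (w - target)^2 for the parameter at `key`."""
    w = params[key]
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)
    params[key] = w

# Shared model: one parameter reused for every task, old data unavailable.
shared = {"w": 0.0}
train(shared, "w", target=1.0)    # task 1 wants w near  1
train(shared, "w", target=-1.0)   # task 2 wants w near -1; task 1 is overwritten

# Task-CL: the task ID (known at evaluation) selects its own parameter.
per_task = {"t1": 0.0, "t2": 0.0}
train(per_task, "t1", target=1.0)
train(per_task, "t2", target=-1.0)

print(round(shared["w"], 2))     # -1.0: the task-1 solution is forgotten
print(round(per_task["t1"], 2))  #  1.0: retained alongside task 2
```

Real continual-learning methods replace the per-task parameter dictionary with regularization, replay, or parameter isolation, but the failure mode being prevented is the same overwrite shown here.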

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review


Latest papers with no code

Toward industrial use of continual learning : new metrics proposal for class incremental learning

no code yet • 10 Apr 2024

In this paper, we investigate the performance metrics used in class-incremental learning strategies for continual learning (CL), using several high-performing methods.

Multi-Label Continual Learning for the Medical Domain: A Novel Benchmark

no code yet • 10 Apr 2024

The proposed method mitigates forgetting while adapting to new classes and domain shifts by combining the advantages of Replay and Pseudo-Label methods while addressing their limitations in the proposed scenario.
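Experience replay, one of the two ingredients named above, can be sketched in a few lines (a generic illustration under assumed names, not this paper's implementation): keep a small buffer of past examples and mix them into each new batch so old classes keep being revisited.

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer filled by reservoir sampling, so it holds
    a uniform sample of everything seen in the stream so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=100)
for example in range(1000):   # stand-in for a stream of training examples
    buf.add(example)
replayed = buf.sample(16)     # mixed into the next batch of new-task data
print(len(buf.items), len(replayed))  # 100 16
```

The pseudo-label half of such a method would additionally use the previous model's predictions as targets for old classes on new data; only the replay half is sketched here.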

Hyperparameter Selection in Continual Learning

no code yet • 9 Apr 2024

In continual learning (CL) -- where a learner trains on a stream of data -- standard hyperparameter optimisation (HPO) cannot be applied, as a learner does not have access to all of the data at the same time.

On the Convergence of Continual Learning with Adaptive Methods

no code yet • 8 Apr 2024

One of the objectives of continual learning is to prevent catastrophic forgetting when learning multiple tasks sequentially, and existing solutions have been driven by the conceptualization of the plasticity-stability dilemma.

Learn When (not) to Trust Language Models: A Privacy-Centric Adaptive Model-Aware Approach

no code yet • 4 Apr 2024

Despite their great success, the knowledge provided by the retrieval process is not always useful for improving the model prediction, since in some samples LLMs may already be quite knowledgeable and thus be able to answer the question correctly without retrieval.

Empowering Biomedical Discovery with AI Agents

no code yet • 3 Apr 2024

We envision 'AI scientists' as systems capable of skeptical learning and reasoning that empower biomedical research through collaborative agents that integrate machine learning tools with experimental platforms.

Continual Learning of Numerous Tasks from Long-tail Distributions

no code yet • 3 Apr 2024

In this paper, we investigate the performance of continual learning algorithms with a large number of tasks drawn from a task distribution that is long-tail in terms of task sizes.
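A task distribution that is long-tail in task sizes, as described above, can be generated with a simple Zipf-like rule (a hypothetical sketch of the setup, not the paper's protocol): a few tasks get most of the data while the majority stay small.

```python
import random

rng = random.Random(0)
num_tasks = 500

# Zipf-like task sizes: size proportional to 1 / rank, floored at 1 example.
sizes = [max(1, int(1000 / (rank + 1))) for rank in range(num_tasks)]
rng.shuffle(sizes)  # tasks arrive in random order in the stream

print(max(sizes), min(sizes))  # 1000 2: the head dwarfs the tail
```

Under this distribution most tasks contribute at most a handful of examples, which is what stresses continual-learning algorithms tuned on equally sized task splits.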

Continual Learning for Smart City: A Survey

no code yet • 1 Apr 2024

We believe this survey can help relevant researchers quickly familiarize themselves with the current state of continual learning research used in smart city development and direct them to future research trends.

Make Continual Learning Stronger via C-Flat

no code yet • 1 Apr 2024

This paper presents a general framework of C-Flat applicable to all CL categories, together with a thorough comparison against loss-minima optimizers and flat-minima-based CL approaches, showing that the method can boost CL performance in almost all cases.

Rehearsal-Free Modular and Compositional Continual Learning for Language Models

no code yet • 31 Mar 2024

Continual learning aims at incrementally acquiring new knowledge while not forgetting existing knowledge.