Continual Learning

827 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of training a model on a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, where the data from old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks here are task-incremental (Task-CL), where the task ID is provided at test time.
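The Task-CL protocol above can be sketched as follows. This is a minimal illustrative stand-in, not any specific method from the papers listed on this page: the multi-head model, the placeholder "training," and the prediction rule are all assumptions chosen only to show the sequential-task / task-ID-at-test-time setup.

```python
# Minimal sketch of the task-incremental (Task-CL) protocol: tasks arrive
# sequentially, old data is unavailable later, and the task ID is given at
# test time. All names here are illustrative, not a real CL method.

class MultiHeadModel:
    """A shared backbone with one output head per task (a common Task-CL design)."""

    def __init__(self):
        self.heads = {}  # task_id -> task-specific head (placeholder)

    def train_task(self, task_id, data):
        # Each task is seen once; its data cannot be revisited afterwards.
        # Real methods would update shared weights here (and risk forgetting).
        self.heads[task_id] = {"classes": sorted({y for _, y in data})}

    def predict(self, task_id, x):
        # Because the task ID is provided at test time, the model only has
        # to choose among that task's classes (trivial placeholder rule).
        head = self.heads[task_id]
        return head["classes"][x % len(head["classes"])]


# Tasks arrive as a stream; data for task 0 is gone once task 1 is trained.
model = MultiHeadModel()
task_stream = {0: [(0, "cat"), (1, "dog")], 1: [(0, "car"), (1, "truck")]}
for task_id, data in task_stream.items():
    model.train_task(task_id, data)

print(model.predict(0, 0))  # head for task 0 chooses among {"cat", "dog"}
```

In class-incremental settings (e.g. the CIL/GCIL papers below), the task ID is *not* given at test time, so the model must discriminate across all classes seen so far, which makes forgetting much harder to avoid.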

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Libraries

Use these libraries to find Continual Learning models and implementations

ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning

clovaai/ECLIPSE 29 Mar 2024

Panoptic segmentation, combining semantic and instance segmentation, stands as a cutting-edge computer vision task.

CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models

srvcodes/clap4clip 28 Mar 2024

The deterministic nature of the existing finetuning methods makes them overlook the many possible interactions across the modalities and deems them unsafe for high-risk CL tasks requiring reliable uncertainty estimation.

DS-AL: A Dual-Stream Analytic Learning for Exemplar-Free Class-Incremental Learning

ZHUANGHP/Analytic-continual-learning 26 Mar 2024

The compensation stream is governed by a Dual-Activation Compensation (DAC) module.

G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning

ZHUANGHP/Analytic-continual-learning 23 Mar 2024

The generalized CIL (GCIL) aims to address the CIL problem in a more real-world scenario, where incoming data have mixed data categories and unknown sample size distribution, leading to intensified forgetting.

A Unified and General Framework for Continual Learning

joey-wang123/cl-refresh-learning 20 Mar 2024

Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.

Predictive, scalable and interpretable knowledge tracing on structured domains

mlcolab/psi-kt 19 Mar 2024

This requires estimates of both the learner's progress ("knowledge tracing"; KT), and the prerequisite structure of the learning domain ("knowledge mapping").

Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters

jiazuoyu/moe-adapters4cl 18 Mar 2024

Continual learning can empower vision-language models to continuously acquire new knowledge, without the need for access to the entire historical dataset.

Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaboration

tree-shu-zhao/rebq.pytorch 17 Mar 2024

Meanwhile, our RebQ leverages extensive multi-modal knowledge from pre-trained LMMs to reconstruct the data of missing modality.

Function-space Parameterization of Neural Networks for Sequential Learning

AaltoML/sfr-experiments 16 Mar 2024

Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining.

CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning

YukunLi99/CoLeCLIP 15 Mar 2024

Large pre-trained VLMs like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in CL, but they focus on closed-set CL in a single domain dataset.
