Continual Learning

839 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, in a setting where data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks listed here are Task-CL (task-incremental learning), where the task ID is provided at evaluation time.
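
A minimal sketch of this Task-CL protocol, assuming a toy PyTorch setup (the random data, two-layer model, per-task heads, and naive fine-tuning loop are illustrative assumptions, not the method of any paper listed below):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Two toy binary-classification tasks; in Task-CL each task gets its own head.
tasks = [
    (torch.randn(256, 16), torch.randint(0, 2, (256,))),        # task 0
    (torch.randn(256, 16) + 2.0, torch.randint(0, 2, (256,))),  # task 1
]

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
heads = nn.ModuleList([nn.Linear(32, 2) for _ in tasks])  # one head per task
opt = torch.optim.SGD(list(backbone.parameters()) + list(heads.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for task_id, (x, y) in enumerate(tasks):
    # Only the current task's data is visible here; earlier tasks' data is gone.
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(heads[task_id](backbone(x)), y)
        loss.backward()  # naive fine-tuning: nothing here prevents forgetting
        opt.step()

# Task-CL evaluation: the provided task ID selects the matching head.
with torch.no_grad():
    for task_id, (x, y) in enumerate(tasks):
        acc = (heads[task_id](backbone(x)).argmax(-1) == y).float().mean()
        print(f"task {task_id}: accuracy after sequential training = {acc:.2f}")
```

The approaches listed below (e.g., replay, prompt-based, and adapter-based methods) replace this naive loop with mechanisms that limit forgetting of earlier tasks while later ones are trained.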

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Libraries

Use these libraries to find Continual Learning models and implementations; 8 libraries are indexed for this task.

A Unified and General Framework for Continual Learning

joey-wang123/cl-refresh-learning 20 Mar 2024

Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.

Predictive, scalable and interpretable knowledge tracing on structured domains

mlcolab/psi-kt 19 Mar 2024

This requires estimates of both the learner's progress ("knowledge tracing"; KT) and the prerequisite structure of the learning domain ("knowledge mapping").

Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters

jiazuoyu/moe-adapters4cl 18 Mar 2024

Continual learning can empower vision-language models to continuously acquire new knowledge, without the need for access to the entire historical dataset.

Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaboration

tree-shu-zhao/rebq.pytorch 17 Mar 2024

Meanwhile, our RebQ leverages extensive multi-modal knowledge from pre-trained LMMs to reconstruct the data of the missing modality.

Function-space Parameterization of Neural Networks for Sequential Learning

AaltoML/sfr-experiments 16 Mar 2024

Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining.

CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning

YukunLi99/CoLeCLIP 15 Mar 2024

Large pre-trained VLMs like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in CL, but they focus on closed-set CL on a single-domain dataset.

Open Continual Feature Selection via Granular-Ball Knowledge Transfer

diadai/cfs 15 Mar 2024

To this end, the proposed CFS method combines the strengths of continual learning (CL) with granular-ball computing (GBC), which focuses on constructing a granular-ball knowledge base to detect unknown classes and facilitate the transfer of previously learned knowledge for further feature selection.

Simple and Scalable Strategies to Continually Pre-train Large Language Models

eleutherai/gpt-neox 13 Mar 2024

In this work, we show that a simple and scalable combination of learning rate (LR) re-warming, LR re-decaying, and replay of previous data is sufficient to match the performance of fully re-training from scratch on all available data, as measured by the final loss and the average score on several language model (LM) evaluation benchmarks.
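
As an illustrative sketch of this recipe (not the paper's exact implementation; the warmup length, peak and minimum learning rates, and the 5% replay fraction below are assumptions), learning-rate re-warming and re-decaying plus replay of previous data can look like this:

```python
import math
import random

def lr_at(step, total_steps, peak_lr=3e-4, min_lr=3e-5, warmup_steps=200):
    """LR re-warming (linear) followed by re-decaying (cosine), restarted for the new data stage."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

def mixed_batch(new_data, old_data, batch_size=8, replay_fraction=0.05):
    """Each batch is mostly new-domain data plus a small replayed share of the old data."""
    n_replay = max(1, int(batch_size * replay_fraction))
    return random.sample(old_data, n_replay) + random.sample(new_data, batch_size - n_replay)

# Toy usage: strings stand in for pre-training documents.
old_data = [f"old_doc_{i}" for i in range(1000)]
new_data = [f"new_doc_{i}" for i in range(1000)]
total_steps = 1000
for step in (0, 100, 500, 999):
    batch = mixed_batch(new_data, old_data)
    print(f"step {step}: lr={lr_at(step, total_steps):.2e}, replay sample={batch[0]!r}")
```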

Consistent Prompting for Rehearsal-Free Continual Learning

Zhanxin-Gao/CPrompt 13 Mar 2024

Specifically, all existing classifiers are exposed to prompt training, resulting in classifier consistency learning.

DAM: Dynamic Adapter Merging for Continual Video QA Learning

klauscc/dam 13 Mar 2024

Our DAM model outperforms prior state-of-the-art continual learning approaches by 9.1% while exhibiting 1.9% less forgetting on 6 VidQA datasets spanning various domains.
