Continual Learning

847 papers with code • 29 benchmarks • 30 datasets

Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from preceding tasks, where data from old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks here are Task-CL (task-incremental), where the task ID is provided at evaluation time.
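
As a concrete illustration of this setting, here is a minimal sketch (in PyTorch) of sequential training where only the current task's data is available; `make_task` and the tiny network are illustrative stand-ins, not taken from any of the papers below.

```python
# Minimal sketch of the continual learning setting: tasks arrive one after
# another, and old task data is unavailable while training on new tasks.
import torch
import torch.nn as nn

def make_task(seed, n=512, dim=20, classes=2):
    """Synthetic stand-in for one task's dataset."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    y = torch.randint(0, classes, (n,), generator=g)
    return x, y

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for task_id in range(3):                 # tasks seen strictly in sequence
    x, y = make_task(seed=task_id)       # only the current task's data
    for _ in range(10):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Naive sequential fine-tuning like this typically suffers catastrophic
    # forgetting: accuracy on earlier tasks degrades after each new task.
```

In the Task-CL variant benchmarked here, the provided task ID would additionally select a task-specific output head at evaluation time.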

Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review

Libraries

Use these libraries to find Continual Learning models and implementations

Function-space Parameterization of Neural Networks for Sequential Learning

AaltoML/sfr-experiments 16 Mar 2024

Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining.
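
The sparsification idea can be illustrated with generic function-space regularization: keep a small set of anchor inputs and penalize the new model for changing its outputs there. This is a hedged sketch of that general principle, not the paper's actual SFR parameterization; all names below are illustrative.

```python
# Sketch: retain prior knowledge via a function-space penalty at a sparse
# set of anchor points, while fitting new data.
import torch
import torch.nn as nn

def fit_with_function_space_penalty(model, old_model, x_new, y_new,
                                    x_anchor, lam=1.0, steps=100, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    with torch.no_grad():
        f_old = old_model(x_anchor)          # remembered function values
    for _ in range(steps):
        opt.zero_grad()
        loss = ce(model(x_new), y_new)       # fit the new data
        # Penalize drift of the function on the sparse anchor set.
        loss = loss + lam * ((model(x_anchor) - f_old) ** 2).mean()
        loss.backward()
        opt.step()
    return model
```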

CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning

YukunLi99/CoLeCLIP 15 Mar 2024

Large pre-trained VLMs like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in CL, but they focus on closed-set CL on single-domain datasets.

Open Continual Feature Selection via Granular-Ball Knowledge Transfer

diadai/cfs 15 Mar 2024

To this end, the proposed CFS method combines the strengths of continual learning (CL) with granular-ball computing (GBC), which focuses on constructing a granular-ball knowledge base to detect unknown classes and facilitate the transfer of previously learned knowledge for further feature selection.
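
A hedged sketch of the granular-ball idea as described: summarize each known class by a ball (center, radius) in feature space, and flag samples that fall outside every ball as unknown. The paper's actual granular-ball construction is more elaborate; `fit_balls` and `predict_open_set` are illustrative names.

```python
# Sketch: granular-ball style knowledge base for open-set detection.
import numpy as np

def fit_balls(features, labels):
    """One ball (center, radius) per known class."""
    balls = {}
    for c in np.unique(labels):
        pts = features[labels == c]
        center = pts.mean(axis=0)
        radius = np.linalg.norm(pts - center, axis=1).max()
        balls[c] = (center, radius)
    return balls

def predict_open_set(balls, x):
    for c, (center, radius) in balls.items():
        if np.linalg.norm(x - center) <= radius:
            return c          # inside a known class's ball
    return "unknown"          # outside all balls -> treat as a novel class
```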

Simple and Scalable Strategies to Continually Pre-train Large Language Models

eleutherai/gpt-neox 13 Mar 2024

In this work, we show that a simple and scalable combination of learning rate (LR) re-warming, LR re-decaying, and replay of previous data is sufficient to match the performance of fully re-training from scratch on all available data, as measured by the final loss and the average score on several language model (LM) evaluation benchmarks.
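
The three ingredients named in the abstract (LR re-warming, LR re-decaying, and replay) are easy to sketch. The schedule shape and replay fraction below are illustrative placeholders, not the paper's tuned values.

```python
# Sketch: re-warm then re-decay the LR when continuing pre-training on a new
# corpus, and mix a small fraction of replayed old-corpus data into batches.
import math
import random

def lr_rewarmed_cosine(step, total_steps, warmup_steps=1000,
                       lr_max=3e-4, lr_min=3e-5):
    if step < warmup_steps:                       # LR re-warming
        return lr_max * step / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

def sample_example(new_corpus, old_corpus, replay_frac=0.05):
    # Replay: with small probability, draw from the previous corpus.
    source = old_corpus if random.random() < replay_frac else new_corpus
    return random.choice(source)
```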

Consistent Prompting for Rehearsal-Free Continual Learning

Zhanxin-Gao/CPrompt 13 Mar 2024

Specifically, all existing classifiers are exposed to prompt training, resulting in classifier consistency learning.

DAM: Dynamic Adapter Merging for Continual Video QA Learning

klauscc/dam 13 Mar 2024

Our DAM model outperforms prior state-of-the-art continual learning approaches by 9.1% while exhibiting 1.9% less forgetting on 6 VidQA datasets spanning various domains.
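
A hedged sketch of what adapter merging can look like: one adapter per past domain, with router-predicted coefficients used to average adapter *weights* (rather than pick a single adapter) at inference. The router and adapter shapes are illustrative; DAM's actual architecture differs.

```python
# Sketch: merge per-domain adapter parameters with router coefficients.
import torch
import torch.nn as nn

class MergedAdapter(nn.Module):
    def __init__(self, adapters):                # one nn.Linear per domain
        super().__init__()
        self.adapters = nn.ModuleList(adapters)

    def forward(self, x, router_probs):
        # Merge parameters (not outputs): weighted sum of adapter weights.
        w = sum(p * a.weight for p, a in zip(router_probs, self.adapters))
        b = sum(p * a.bias for p, a in zip(router_probs, self.adapters))
        return nn.functional.linear(x, w, b)

adapters = [nn.Linear(16, 16) for _ in range(3)]  # e.g. 3 past VidQA domains
probs = torch.softmax(torch.randn(3), dim=0)      # stand-in for a router
out = MergedAdapter(adapters)(torch.randn(4, 16), probs)
```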

FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by Training Randomly Pruned Sparse Experts

muratonuryildirim/focil 13 Mar 2024

Class incremental learning (CIL) in an online continual learning setting strives to acquire knowledge on a series of novel classes from a data stream, using each data point only once for training.
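
A hedged sketch of the finetune-and-freeze idea: for each task, sample a random sparse mask over the backbone, train only those weights, then freeze them so later tasks cannot overwrite this task's expert. The mask density and selection below are illustrative, not FOCIL's exact procedure.

```python
# Sketch: randomly pruned sparse expert per task, then freeze it.
import torch

def random_free_mask(weight, frozen_mask, density=0.1):
    """Pick a random sparse subset of the still-unfrozen weights."""
    free = ~frozen_mask
    scores = torch.rand_like(weight) * free       # zero out frozen slots
    k = max(1, int(density * weight.numel()))
    thresh = scores.flatten().topk(k).values.min()
    return (scores >= thresh) & free

def masked_sgd_step(weight, grad, task_mask, lr=0.1):
    with torch.no_grad():
        weight -= lr * grad * task_mask           # update only this expert

# After finishing a task: frozen_mask |= task_mask, so the expert's weights
# stay fixed while future tasks train their own random sparse subnetworks.
```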

Online Continual Learning For Interactive Instruction Following Agents

snumprlab/cl-alfred 12 Mar 2024

To take a step towards a more realistic embodied agent learning scenario, we propose two continual learning setups for embodied agents: learning new behaviors (Behavior Incremental Learning, Behavior-IL) and new environments (Environment Incremental Learning, Environment-IL). For these tasks, previous 'data prior' based continual learning methods maintain logits for the past tasks.
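
Maintaining logits for past tasks can be sketched in the style of dark experience replay: store (input, logits) pairs in a small buffer and distill against them while learning new behaviors or environments. Buffer size and loss weight here are illustrative, not the paper's settings.

```python
# Sketch: replay buffer of past-task logits plus a distillation term.
import random
import torch
import torch.nn as nn

buffer = []                                   # list of (x, old_logits)

def remember(x, logits, capacity=500):
    if len(buffer) < capacity:
        buffer.append((x.detach(), logits.detach()))

def cl_loss(model, x_new, y_new, alpha=0.5):
    loss = nn.functional.cross_entropy(model(x_new), y_new)
    if buffer:
        x_old, z_old = random.choice(buffer)
        # Keep current predictions close to the stored past-task logits.
        loss = loss + alpha * nn.functional.mse_loss(model(x_old), z_old)
    return loss
```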

Continual All-in-One Adverse Weather Removal with Knowledge Replay on a Unified Network Structure

xiaojihh/cl_all-in-one 12 Mar 2024

It considers the characteristics of the image restoration task with multiple degradations in continual learning, allowing knowledge for different degradations to be shared and accumulated in the unified network structure.

Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning

cl-premonition/premonition 12 Mar 2024

We show here that the combination of a large language model and an image generation model can similarly provide useful premonitions as to how a continual learning challenge might develop over time.
