Continual Learning
839 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of training a single model on a long sequence of tasks without forgetting the knowledge obtained from preceding tasks, under the constraint that data from old tasks is no longer available when training on new ones.
Unless otherwise noted, the benchmarks listed here are Task-CL (task-incremental learning), where the task ID is provided at validation time.
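To make the setting concrete, here is a minimal PyTorch sketch of sequential task training: only the current task's data is visible at each stage, and in Task-CL the task ID is also known at evaluation. The model, data, and hyperparameters are toy placeholders rather than any particular benchmark.

```python
# Minimal sketch of the continual learning setting (toy model and data, not a benchmark).
import torch
import torch.nn as nn

model = nn.Linear(32, 10)                  # stand-in for any classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Each "task" here is random toy data; real benchmarks split classes or domains.
tasks = [(torch.randn(64, 32), torch.randint(0, 10, (64,))) for _ in range(3)]

for task_id, (x, y) in enumerate(tasks):
    # Only the current task's data is available; earlier tasks' data is gone.
    for _ in range(10):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    # Task-CL evaluation: the task ID is provided, so the matching output head can
    # be selected; without any mitigation, accuracy on earlier tasks typically
    # degrades (catastrophic forgetting).
```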
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
Libraries
Use these libraries to find Continual Learning models and implementations.
Datasets
Subtasks
Latest papers
A Unified and General Framework for Continual Learning
Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.
Predictive, scalable and interpretable knowledge tracing on structured domains
This requires estimates of both the learner's progress ("knowledge tracing"; KT), and the prerequisite structure of the learning domain ("knowledge mapping").
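For readers unfamiliar with the term, the classical Bayesian Knowledge Tracing update below illustrates what estimating a learner's progress means in its simplest form; it is a textbook baseline with illustrative parameter values, not the structured model proposed in this paper.

```python
def bkt_update(p_known, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: update P(skill mastered) after an answer."""
    if correct:
        posterior = p_known * (1 - slip) / (p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        posterior = p_known * slip / (p_known * slip + (1 - p_known) * (1 - guess))
    # Account for the chance that the skill was learned during this practice step.
    return posterior + (1 - posterior) * learn

# Toy usage: mastery estimate after a correct answer followed by an incorrect one.
p = bkt_update(0.3, correct=True)
p = bkt_update(p, correct=False)
```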
Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters
Continual learning can empower vision-language models to continuously acquire new knowledge, without the need for access to the entire historical dataset.
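As a generic illustration of the mixture-of-experts adapter idea (a sketch of the general pattern, not this paper's architecture; all dimensions and names are assumptions), a lightweight router can softly combine several bottleneck adapters on top of frozen backbone features:

```python
import torch
import torch.nn as nn

class MoEAdapter(nn.Module):
    """Generic mixture-of-experts adapter: a router softly weights small bottleneck experts."""
    def __init__(self, dim=512, n_experts=4, bottleneck=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(dim, n_experts)

    def forward(self, h):                                     # h: frozen features [batch, dim]
        gates = torch.softmax(self.router(h), dim=-1)         # [batch, n_experts]
        outs = torch.stack([e(h) for e in self.experts], -1)  # [batch, dim, n_experts]
        return h + (outs * gates.unsqueeze(1)).sum(-1)        # residual adapter output
```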
Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaboration
Meanwhile, our RebQ leverages extensive multi-modal knowledge from pre-trained LMMs to reconstruct the data of missing modality.
Function-space Parameterization of Neural Networks for Sequential Learning
Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining.
CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning
Large pre-trained VLMs like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in CL, but they focus on closed-set CL on a single-domain dataset.
Open Continual Feature Selection via Granular-Ball Knowledge Transfer
To this end, the proposed CFS method combines the strengths of continual learning (CL) with granular-ball computing (GBC), which focuses on constructing a granular-ball knowledge base to detect unknown classes and facilitate the transfer of previously learned knowledge for further feature selection.
Simple and Scalable Strategies to Continually Pre-train Large Language Models
In this work, we show that a simple and scalable combination of learning rate (LR) re-warming, LR re-decaying, and replay of previous data is sufficient to match the performance of fully re-training from scratch on all available data, as measured by the final loss and the average score on several language model (LM) evaluation benchmarks.
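The recipe described here is simple enough to sketch: restart a warmup-then-decay learning-rate schedule at the start of each new data stage, and mix a small fraction of replayed old examples into every batch. The schedule shape, replay fraction, and function names below are illustrative assumptions, not the paper's exact settings.

```python
import math
import random

def lr_at(step, stage_steps, warmup_steps=100, max_lr=3e-4, min_lr=3e-5):
    """Cosine decay with linear re-warmup, restarted at every new data stage."""
    s = step % stage_steps                       # position inside the current stage
    if s < warmup_steps:                         # LR re-warming
        return max_lr * (s + 1) / warmup_steps
    progress = (s - warmup_steps) / max(1, stage_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))  # LR re-decay

def make_batch(new_data, replay_buffer, batch_size=8, replay_fraction=0.25):
    """Mix a small fraction of previously seen examples into each batch (replay)."""
    n_replay = min(len(replay_buffer), int(batch_size * replay_fraction))
    batch = random.sample(new_data, batch_size - n_replay)
    batch += random.sample(replay_buffer, n_replay)
    return batch
```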
Consistent Prompting for Rehearsal-Free Continual Learning
Specifically, all existing classifiers are exposed to prompt training, resulting in classifier consistency learning.
DAM: Dynamic Adapter Merging for Continual Video QA Learning
Our DAM model outperforms prior state-of-the-art continual learning approaches by 9.1% while exhibiting 1.9% less forgetting on 6 VidQA datasets spanning various domains.
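In its simplest form, merging adapters amounts to a weighted average of per-dataset adapter parameters, with the mixing weights chosen per input by some router; the sketch below shows only the parameter averaging, and the function name and the 70/30 weights are assumptions rather than the DAM implementation.

```python
import torch

def merge_adapters(adapter_state_dicts, weights):
    """Weighted average of several adapters' parameters (all sharing one architecture)."""
    assert len(adapter_state_dicts) == len(weights)
    merged = {}
    for name in adapter_state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, adapter_state_dicts))
    return merged

# Toy usage: two adapters with one weight tensor each, mixed 70/30.
a = {"proj.weight": torch.ones(4, 4)}
b = {"proj.weight": torch.zeros(4, 4)}
merged = merge_adapters([a, b], weights=[0.7, 0.3])
```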