Continual Learning
835 papers with code • 29 benchmarks • 30 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model over a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, where data from old tasks is no longer available while training on new ones.
Unless stated otherwise, the benchmarks here are Task-CL (task-incremental), where the task id is provided at evaluation time.
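The sequential protocol described above can be sketched with a toy learner. Everything below (the nearest-class-mean "model", the 1-D features, the class names) is illustrative and not taken from any benchmark implementation; the point is only the Task-CL contract: each task's data is seen once, old data is gone, and the task id is available at prediction time.

```python
# Toy sketch of the task-incremental (Task-CL) protocol with a
# nearest-class-mean "model". All names and data are illustrative.

class NearestMeanModel:
    def __init__(self):
        self.means = {}  # (task_id, label) -> feature mean

    def fit_task(self, task_id, data):
        # Train on the current task only; earlier tasks' data is unavailable.
        for label, xs in data.items():
            self.means[(task_id, label)] = sum(xs) / len(xs)

    def predict(self, task_id, x):
        # Task-CL: the task id is given at evaluation, so we only compare
        # against the classes belonging to that task.
        candidates = {k: m for k, m in self.means.items() if k[0] == task_id}
        return min(candidates, key=lambda k: abs(candidates[k] - x))[1]

# Two sequential 1-D toy tasks; the learner never revisits task 0's data.
tasks = [
    {"a": [0.0, 0.2], "b": [1.0, 1.2]},  # task 0
    {"c": [5.0, 5.2], "d": [9.0, 9.2]},  # task 1
]
model = NearestMeanModel()
for t, data in enumerate(tasks):
    model.fit_task(t, data)

print(model.predict(0, 0.1))  # -> a
print(model.predict(1, 9.1))  # -> d
```

For a gradient-trained network the same loop would overwrite task 0's weights while fitting task 1; that degradation is the catastrophic forgetting that continual learning methods try to prevent.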
Source:
Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
Three scenarios for continual learning
Lifelong Machine Learning
Continual lifelong learning with neural networks: A review
Libraries
Use these libraries to find Continual Learning models and implementations
Datasets
Subtasks
Latest papers with no code
Unlocking Robust Segmentation Across All Age Groups via Continual Learning
Most deep learning models in medical imaging are trained on adult data with unclear performance on pediatric images.
Adaptive Memory Replay for Continual Learning
Continual learning (CL) has been extensively studied, but primarily in a setting where only a small amount of past data can be stored.
Graph Continual Learning with Debiased Lossless Memory Replay
Graph continual learning (GCL) continually adapts GNNs to the expanded graph of the current task while maintaining performance on the graphs of previous tasks.
Watch Your Step: Optimal Retrieval for Continual Learning at Scale
One of the most widely used approaches in continual learning is referred to as replay.
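As a rough sketch of how replay works, the following maintains a small buffer of past examples (here filled by reservoir sampling) that a learner would mix into each new task's updates. The capacity, the sampling policy, and all names are assumptions for illustration, not the method of any particular paper above.

```python
import random

# Sketch of experience replay: a fixed-size, reservoir-sampled buffer of
# past examples is sampled alongside each new example. Illustrative only.

class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over everything seen,
        # regardless of how many examples have streamed past.
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer(capacity=50)
for task_id in range(3):
    for i in range(100):  # a stream of 100 examples per task
        # A real learner would compute its loss on the current example
        # plus a handful of replayed past examples here.
        replayed = buffer.sample(8)
        buffer.add((task_id, i))

print(len(buffer.data))  # -> 50
```

After three tasks the buffer holds a roughly uniform mix of examples from all of them, which is what lets replay-based methods keep rehearsing old tasks under a fixed memory budget.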
Towards Practical Tool Usage for Continually Learning LLMs
Large language models (LLMs) show an innate skill for solving language-based tasks.
AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees
Large language models (LLMs) are increasingly capable of completing knowledge intensive tasks by recalling information from a static pretraining corpus.
Realistic Continual Learning Approach using Pre-trained Models
Continual learning (CL) is crucial for evaluating whether learning systems can adapt to new tasks while retaining previously acquired knowledge.
Remembering Transformer for Continual Learning
Neural networks encounter the challenge of Catastrophic Forgetting (CF) in continual learning, where new task knowledge interferes with previously learned knowledge.
Learning to Classify New Foods Incrementally Via Compressed Exemplars
Therefore, food image classification systems should adapt to and manage data that continuously evolves.
Sketch-Plan-Generalize: Continual Few-Shot Learning of Inductively Generalizable Spatial Concepts for Language-Guided Robot Manipulation
Our goal is to build embodied agents that can learn inductively generalizable spatial concepts in a continual manner, e.g., constructing a tower of a given height.