
Continual Learning

146 papers with code · Methodology

Continual Learning is the problem of learning a model for a large number of tasks sequentially, without forgetting the knowledge obtained from preceding tasks, when data from the old tasks is no longer available while training on new ones.

Source: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
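To make the setting concrete, the sketch below is a hedged toy example in PyTorch with synthetic data (the model, task generator, and hyperparameters are illustrative assumptions, not any listed paper's method): one model is trained on tasks arriving one after another, data from earlier tasks is never revisited during training, and accuracy on every task seen so far is reported, which is where catastrophic forgetting becomes visible.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(num_classes=2, dim=20, n=500):
    # Synthetic classification task: Gaussian clusters with random centers.
    centers = torch.randn(num_classes, dim) * 3.0
    y = torch.randint(num_classes, (n,))
    x = centers[y] + torch.randn(n, dim)
    return x, y

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

tasks = [make_task() for _ in range(3)]   # sequentially arriving tasks
seen = []                                 # old data kept ONLY for evaluation

for t, (x, y) in enumerate(tasks):
    seen.append((x, y))
    for _ in range(100):                  # train on the current task alone
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():                 # accuracy typically drops on earlier tasks
        accs = [round((model(xs).argmax(1) == ys).float().mean().item(), 3)
                for xs, ys in seen]
    print(f"after task {t}: accuracy on tasks seen so far = {accs}")
```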

Latest papers with code

Look-ahead Meta Learning for Continual Learning

NeurIPS 2020 montrealrobotics/La-MAML

The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks.

CONTINUAL LEARNING META-LEARNING

15
01 Dec 2020

Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks

NeurIPS 2020 ZixuanKe/CAT

To the best of our knowledge, no technique has been proposed that can learn a sequence of mixed similar and dissimilar tasks while dealing with forgetting and also transferring knowledge forward and backward.

CONTINUAL LEARNING TRANSFER LEARNING

3
01 Dec 2020

BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning

25 Nov 2020 danielm1405/BinPlay

We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks.

CONTINUAL LEARNING

0
25 Nov 2020
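The sketch below illustrates the generic idea of latent rehearsal / generative replay referred to in the entry above, as a hedged toy example (the autoencoder, task generator, and training schedule are illustrative assumptions, not the paper's binary-latent method): old-task samples are compressed to autoencoder codes, only the codes and labels are stored, and while training on a new task the decoder reconstructs pseudo-samples from those codes and mixes them into each batch.

```python
# Generic latent-rehearsal sketch (illustrative assumptions, not BinPlay itself).
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, code = 20, 4

encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, code))
decoder = nn.Sequential(nn.Linear(code, 32), nn.ReLU(), nn.Linear(32, dim))
classifier = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2))

def make_task(n=400):
    centers = torch.randn(2, dim) * 3.0
    y = torch.randint(2, (n,))
    return centers[y] + torch.randn(n, dim), y

(x_old, y_old), (x_new, y_new) = make_task(), make_task()

# 1) Train the autoencoder and the classifier on the old task.
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
for _ in range(200):
    ae_opt.zero_grad(); clf_opt.zero_grad()
    recon = decoder(encoder(x_old))
    loss = nn.functional.mse_loss(recon, x_old) + nn.functional.cross_entropy(classifier(x_old), y_old)
    loss.backward(); ae_opt.step(); clf_opt.step()

# 2) Keep only compact latent codes and labels; discard the raw old data.
with torch.no_grad():
    stored_codes, stored_labels = encoder(x_old), y_old

# 3) Train on the new task while rehearsing decoded pseudo-samples of the old one.
for _ in range(200):
    clf_opt.zero_grad()
    with torch.no_grad():
        replay_x = decoder(stored_codes)          # reconstructed old inputs
    x = torch.cat([x_new, replay_x])
    y = torch.cat([y_new, stored_labels])
    nn.functional.cross_entropy(classifier(x), y).backward()
    clf_opt.step()
```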

Energy-Based Models for Continual Learning

ICLR 2021 ShuangLI59/ebm-continual-learning

We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems.

CONTINUAL LEARNING

2
24 Nov 2020

Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment

16 Nov 2020 Haoran-S/SPAWC2017

We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting knowledge learned from the previous episodes.

CONTINUAL LEARNING FAIRNESS

70
16 Nov 2020

Continual Learning of Control Primitives: Skill Discovery via Reset-Games

10 Nov 2020 siddharthverma314/clcp-neurips-2020

Reinforcement learning has the potential to automate the acquisition of behavior in complex settings, but in order for it to be successfully deployed, a number of practical challenges must be addressed.

CONTINUAL LEARNING

10
10 Nov 2020

Meta-Learning for Natural Language Understanding under Continual Learning Framework

3 Nov 2020 lexili24/NLUProject

Neural networks have been recognized for their accomplishments in tackling various natural language understanding (NLU) tasks.

CONTINUAL LEARNING META-LEARNING NATURAL LANGUAGE UNDERSTANDING

2
03 Nov 2020

AbdomenCT-1K: Is Abdominal Organ Segmentation A Solved Problem?

28 Oct 2020 JunMa11/AbdomenCT-1K

With the unprecedented developments in deep learning, automatic segmentation of the main abdominal organs (i.e., liver, kidney, and spleen) seems to be a solved problem, as state-of-the-art (SOTA) methods have achieved results comparable to inter-observer variability on existing benchmark datasets.

CONTINUAL LEARNING PANCREAS SEGMENTATION

6
28 Oct 2020

A Combinatorial Perspective on Transfer Learning

NeurIPS 2020 deepmind/deepmind-research

Our main postulate is that the combination of task segmentation, modular learning and memory-based ensembling can give rise to generalization on an exponentially growing number of unseen tasks.

CONTINUAL LEARNING TRANSFER LEARNING

3,394
23 Oct 2020

Continual Learning in Low-rank Orthogonal Subspaces

NeurIPS 2020 arslan-chaudhry/orthog_subspace

In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished.

CONTINUAL LEARNING

4
22 Oct 2020