Continual Learning

146 papers with code · Methodology

Continual Learning is the problem of learning a model for a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, where the data from old tasks is no longer available when training on new ones.

Source: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
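A minimal sketch of this setting (an illustration only, not any particular paper's method): tasks arrive one at a time, and while training on task t the learner has no access to data from earlier tasks. The synthetic tasks, model, and hyperparameters below are hypothetical placeholders.

```python
# Continual learning setting: sequential tasks, no access to old task data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_synthetic_task(num_classes=2, n=256, dim=20, seed=0):
    """Hypothetical stand-in for one task's training set."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    y = torch.randint(0, num_classes, (n,), generator=g)
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

tasks = [make_synthetic_task(seed=t) for t in range(3)]  # tasks seen in sequence
for t, loader in enumerate(tasks):
    # Only the current task's data is available here; earlier loaders are
    # never revisited. Without extra machinery (regularization, replay,
    # parameter isolation), the model tends to forget tasks < t.
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```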

Latest papers without code

Mitigating Mode Collapse by Sidestepping Catastrophic Forgetting

ICLR 2021

Generative Adversarial Networks (GANs) are a class of generative models used for various applications, but they have been known to suffer from the mode collapse problem, in which some modes of the target distribution are ignored by the generator.

CONTINUAL LEARNING

A Unified Bayesian Framework for Discriminative and Generative Continual Learning

ICLR 2021

Two notable directions among the recent advances in continual learning with neural networks are (1) variational-Bayes-based regularization by learning priors from previous tasks, and (2) learning the structure of deep networks to adapt to new tasks.

CONTINUAL LEARNING
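Direction (1) in the entry above, prior-based regularization, can be illustrated with a generic, EWC/VCL-flavoured sketch: after finishing a task, keep the learned parameters (plus an importance estimate) as a prior, and penalize deviation from it while training the next task. This is a simplified illustration under assumed names (`prior_regularizer`, a constant `importance` placeholder), not the unified Bayesian framework proposed in the paper.

```python
# Generic prior-based regularization sketch for continual learning.
import torch
from torch import nn

def prior_regularizer(model, prior_means, importance):
    """Quadratic penalty pulling parameters toward values learned on previous tasks."""
    reg = torch.zeros(())
    for name, p in model.named_parameters():
        reg = reg + (importance[name] * (p - prior_means[name]) ** 2).sum()
    return reg

model = nn.Linear(20, 2)
# After finishing a task, snapshot the parameters to serve as the prior mean.
prior_means = {n: p.detach().clone() for n, p in model.named_parameters()}
# Importance weights would normally come from e.g. a Fisher or posterior-variance
# estimate; a constant is used here purely as a placeholder.
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}

# One training step on the next task, regularized toward the previous-task prior.
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y) + 0.1 * prior_regularizer(
    model, prior_means, importance)
loss.backward()
```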

Online Continual Learning Under Domain Shift

ICLR 2021

CIER employs adversarial training to correct the shift in $P(X, Y)$ by matching $P(X|Y)$, which results in an invariant representation that can generalize to unseen domains during inference.

CONTINUAL LEARNING

Contextual Transformation Networks for Online Continual Learning

ICLR 2021

Continual learning methods with fixed architectures rely on a single network to learn models that can perform well on all tasks.

CONTINUAL LEARNING · TRANSFER LEARNING

Nonconvex Continual Learning with Episodic Memory

ICLR 2021

We also show that memory-based approaches have an inherent problem of overfitting to memory, which degrades the performance on previously learned tasks, i.e., catastrophic forgetting.

CONTINUAL LEARNING · IMAGE CLASSIFICATION
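The "memory-based approaches" referred to above typically keep a small episodic memory of past examples and replay it alongside current-task batches; because the buffer is tiny compared to the original tasks, repeatedly optimizing on it is what can lead to overfitting to the memory. Below is a generic replay sketch; the buffer size, FIFO update policy, and model are illustrative assumptions, not the paper's algorithm.

```python
# Generic episodic-memory replay sketch (illustrative assumptions throughout).
import random
import torch
from torch import nn

memory = []          # episodic memory: (x, y) pairs from past tasks
MEMORY_SIZE = 200    # hypothetical memory budget

def update_memory(x_batch, y_batch):
    """Keep a bounded buffer of past examples (simple FIFO policy here)."""
    for x, y in zip(x_batch, y_batch):
        memory.append((x, y))
        if len(memory) > MEMORY_SIZE:
            memory.pop(0)

model = nn.Linear(20, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One step: current-task loss plus a replay loss on a sample from memory.
x_cur, y_cur = torch.randn(32, 20), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x_cur), y_cur)
if memory:
    xs, ys = zip(*random.sample(memory, min(32, len(memory))))
    x_mem, y_mem = torch.stack(xs), torch.stack(ys)
    loss = loss + nn.functional.cross_entropy(model(x_mem), y_mem)
optimizer.zero_grad()
loss.backward()
optimizer.step()
update_memory(x_cur, y_cur)
```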

Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

ICLR 2021

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when the new task is sufficiently different from the previous ones.

CONTINUAL LEARNING · MODEL SELECTION

Highway-Connection Classifier Networks for Plastic yet Stable Continual Learning

ICLR 2021

Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve previous tasks.

CONTINUAL LEARNING

Towards Learning to Remember in Meta Learning of Sequential Domains

ICLR 2021

However, a natural generalization to the sequential domain setting that avoids catastrophic forgetting has not been well investigated.

CONTINUAL LEARNING · META-LEARNING

GraphLog: A Benchmark for Measuring Logical Generalization in Graph Neural Networks

ICLR 2021

In this work, we study the logical generalization capabilities of GNNs by designing a benchmark suite grounded in first-order logic.

CONTINUAL LEARNING · KNOWLEDGE GRAPHS · RELATIONAL REASONING

Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer

ICLR 2021

Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm.

CONTINUAL LEARNING