
# Continual Learning

146 papers with code · Methodology

Continual Learning is the problem of learning a model over a large number of tasks sequentially, without forgetting the knowledge obtained from preceding tasks, in a setting where data from old tasks is no longer available while training on new ones.
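A minimal sketch of this protocol (not any particular paper's method): a single-parameter linear model is trained on one task at a time, and `train_task` only ever sees the current task's data. Both task datasets and the learning rate are illustrative assumptions.

```python
def sgd_step(w, x, y, lr=0.1):
    # One SGD step on squared error for a 1-D linear model: y ≈ w * x.
    grad = 2 * (w * x - y) * x
    return w - lr * grad

def train_task(w, data, epochs=50):
    # Only the current task's data is visible; earlier tasks' data is gone.
    for _ in range(epochs):
        for x, y in data:
            w = sgd_step(w, x, y)
    return w

# Task 1 is generated by y = 2x, task 2 by y = -x; trained strictly in sequence.
task1 = [(x / 10, 2 * x / 10) for x in range(1, 11)]
task2 = [(x / 10, -x / 10) for x in range(1, 11)]

w1 = train_task(0.0, task1)   # converges near 2.0
w2 = train_task(w1, task2)    # converges near -1.0: task 1's solution is overwritten
```

Because nothing constrains the weights while task 2 is trained, the task 1 solution is completely overwritten, which is the catastrophic forgetting that the methods listed below try to mitigate.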


# Mitigating Mode Collapse by Sidestepping Catastrophic Forgetting

Generative Adversarial Networks (GANs) are a class of generative models used for various applications, but they have been known to suffer from the mode collapse problem, in which some modes of the target distribution are ignored by the generator.

# A Unified Bayesian Framework for Discriminative and Generative Continual Learning

Two notable directions among the recent advances in continual learning with neural networks are (1) variational Bayes based regularization by learning priors from previous tasks, and, (2) learning the structure of deep networks to adapt to new tasks.

# Online Continual Learning Under Domain Shift

CIER employs an adversarial training to correct the shift in $P(X, Y)$ by matching $P(X|Y)$, which results in an invariant representation that can generalize to unseen domains during inference.

# Contextual Transformation Networks for Online Continual Learning

Continual learning methods with fixed architectures rely on a single network to learn models that can perform well on all tasks.

# Nonconvex Continual Learning with Episodic Memory

We also show that memory-based approaches have an inherent problem of overfitting to memory, which degrades the performance on previously learned tasks, namely catastrophic forgetting.
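The episodic memory referred to above is typically a small fixed-size buffer of past examples replayed alongside new-task data; a common way to fill it is reservoir sampling. The sketch below is a generic illustration of such a buffer, not this paper's algorithm, and the overfitting risk it mentions arises precisely because the same few stored examples are replayed many times.

```python
import random

class EpisodicMemory:
    """Fixed-size buffer of past examples, filled by reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples offered to the buffer so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep each of the `seen` examples with equal probability
            # capacity / seen by overwriting a random slot.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = example

    def sample(self, k):
        # Mini-batch of stored examples to replay with the current task's batch.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

With a small capacity, every replayed batch is drawn from the same handful of stored examples, so the model can fit the memory itself rather than the old tasks' distributions.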

# Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when a new task is sufficiently different from the previous ones.

# Highway-Connection Classifier Networks for Plastic yet Stable Continual Learning

Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve earlier tasks.

# Towards Learning to Remember in Meta Learning of Sequential Domains

However, a natural generalization to the sequential-domain setting that avoids catastrophic forgetting has not been well investigated.

# GraphLog: A Benchmark for Measuring Logical Generalization in Graph Neural Networks

In this work, we study the logical generalization capabilities of GNNs by designing a benchmark suite grounded in first-order logic.

# Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer

Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm.