Meta-Learning

1186 papers with code • 4 benchmarks • 19 datasets

Meta-learning is the study of "learning to learn": designing machine learning algorithms that improve their ability to learn new tasks from experience on related tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
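To make the "learning to learn" loop concrete, here is a minimal first-order sketch in the spirit of Reptile (a simplification of the MAML family credited above), meta-learning an initialization over a toy family of linear regression tasks. All names and constants are illustrative, not taken from any repository listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(w, x, y):
    # Gradient of mean squared error for the linear model f(x) = w * x.
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                      # meta-learned initialization
inner_lr, meta_lr = 0.05, 0.1

for step in range(2000):
    a = rng.uniform(1.0, 3.0)        # each task: fit y = a * x for a random slope a
    x = rng.normal(size=20)
    y = a * x
    w_task = w
    for _ in range(5):               # inner loop: a few SGD steps on this task
        w_task -= inner_lr * task_loss_grad(w_task, x, y)
    w += meta_lr * (w_task - w)      # Reptile-style outer update toward the adapted weights

print(w)   # settles near 2.0, the mean task slope, i.e. a good starting point for all tasks
```

The inner loop adapts to one task; the outer loop nudges the shared initialization toward wherever adaptation ended up, so a few gradient steps suffice on a new task.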

Most implemented papers

Meta-Learning Representations for Continual Learning

Khurramjaved96/mrcl NeurIPS 2019

We show that it is possible to learn naturally sparse representations that are more effective for online updating.

Meta-Learning with Implicit Gradients

aravindr93/imaml_dev NeurIPS 2019

By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer.
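The claim that the meta-gradient depends only on the inner solution can be checked on a toy quadratic inner problem, where the adapted parameters have a closed form. This is an illustrative sketch of the implicit-gradient identity, not code from the repository; all variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 3, 2.0
A = rng.normal(size=(d, d))
b = rng.normal(size=d)
phi = rng.normal(size=d)                 # meta-parameters (inner-loop initialization)

# Inner problem: minimize 0.5*||A t - b||^2 + (lam/2)*||t - phi||^2.
# For this quadratic it has a closed-form minimizer.
H = A.T @ A                              # Hessian of the task loss
theta_star = np.linalg.solve(H + lam * np.eye(d), A.T @ b + lam * phi)

# A test loss at the adapted parameters, e.g. 0.5*||theta* - c||^2.
c = rng.normal(size=d)
g_test = theta_star - c                  # its gradient at theta*

# Implicit meta-gradient: uses only theta*, never the inner optimization path.
meta_grad = np.linalg.solve(np.eye(d) + H / lam, g_test)

# Check against direct differentiation: d theta*/d phi = lam * (H + lam I)^-1.
direct = lam * np.linalg.solve(H + lam * np.eye(d), g_test)
print(np.allclose(meta_grad, direct))    # True
```

Because (I + H/lam)^-1 equals lam*(H + lam I)^-1, the implicit formula reproduces the exact derivative without backpropagating through inner-loop iterations.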

SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning

mileyan/simple_shot 12 Nov 2019

Few-shot learners aim to recognize new object classes based on a small number of labeled training examples.
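The nearest-neighbor idea can be sketched in a few lines: center features, L2-normalize them (the transform the paper calls CL2N), and classify queries by the nearest class centroid. The features below are synthetic stand-ins for backbone embeddings; everything here is illustrative, not the repository's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cl2n(feats, base_mean):
    # Center by the base-class mean, then L2-normalize each feature vector.
    f = feats - base_mean
    return f / np.linalg.norm(f, axis=-1, keepdims=True)

# Synthetic embeddings for a 5-way, 3-shot episode in 16 dimensions.
d = 16
means = 10.0 * np.eye(5, d)                         # well-separated class means
support = means[:, None, :] + rng.normal(size=(5, 3, d))
query = means + 0.5 * rng.normal(size=(5, d))       # one query per class
base_mean = rng.normal(size=d)                      # stand-in base-class mean

centroids = cl2n(support, base_mean).mean(axis=1)   # one centroid per class
q = cl2n(query, base_mean)
dists = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
pred = dists.argmin(axis=1)                         # nearest-centroid prediction
print(pred)                                         # recovers the class labels 0..4
```

No episodic training or fine-tuning is involved at test time; all the work is in the feature transform and a nearest-centroid lookup.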

TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second

automl/tabpfn 5 Jul 2022

We present TabPFN, a trained Transformer that can do supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning and is competitive with state-of-the-art classification methods.

Learning to Generalize: Meta-Learning for Domain Generalization

thuml/Transfer-Learning-Library 10 Oct 2017

We propose a novel meta-learning method for domain generalization.

DiCE: The Infinitely Differentiable Monte-Carlo Estimator

alshedivat/lola 14 Feb 2018

Lastly, to match the first-order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for estimators of higher-order derivatives.

Differentiable plasticity: training plastic neural networks with backpropagation

uber-common/differentiable-plasticity ICML 2018

How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?
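The core mechanism is a layer whose effective weights combine fixed "slow" weights with a Hebbian trace gated by per-connection plasticity coefficients. Below is a forward-pass sketch of those dynamics in numpy; in the actual approach the slow weights and plasticity coefficients are themselves trained by backpropagation, which this toy omits, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
w = 0.1 * rng.normal(size=(n, n))     # fixed (slow) weights, normally trained by SGD
alpha = 0.5 * np.ones((n, n))         # per-connection plasticity coefficients
eta = 0.1                             # learning rate of the Hebbian trace
hebb = np.zeros((n, n))               # fast weights, updated during the "lifetime"

def plastic_step(x, hebb):
    # Effective weight = slow weight + plasticity-gated Hebbian trace.
    y = np.tanh((w + alpha * hebb) @ x)
    # Decaying Hebbian rule: the trace moves toward the outer product of activity.
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    return y, hebb

pattern = rng.normal(size=n)
for _ in range(20):                   # repeatedly present one input pattern
    y, hebb = plastic_step(pattern, hebb)

print(hebb.shape, float(np.abs(hebb).sum()) > 0.0)
```

After a few presentations the trace stores an association with the input, so the network keeps "learning" from experience after its slow weights are frozen.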

Meta-learning with differentiable closed-form solvers

learnables/learn2learn ICLR 2019

The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
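The base-learner idea can be sketched directly: fit ridge regression on support-set embeddings in closed form, then classify queries with the resulting weights. The embeddings below are synthetic stand-ins, and in the actual method the closed-form solve is differentiated through to train the feature extractor, which this toy skips.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, Y, lam=1.0):
    # Closed-form ridge solution W = (X^T X + lam I)^-1 X^T Y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Synthetic embeddings for a 5-way, 5-shot support set in 32 dimensions.
n_way, n_shot, d = 5, 5, 32
means = 6.0 * np.eye(n_way, d)
X_support = np.repeat(means, n_shot, axis=0) + rng.normal(size=(n_way * n_shot, d))
Y_support = np.repeat(np.eye(n_way), n_shot, axis=0)   # one-hot class targets

W = ridge_fit(X_support, Y_support, lam=1.0)           # fast, closed-form adaptation

# Queries near each class mean are classified by argmax of the ridge predictions.
X_query = means + 0.5 * rng.normal(size=(n_way, d))
pred = (X_query @ W).argmax(axis=1)
print(pred)                                            # recovers the class labels 0..4
```

Because the inner "learner" is a single linear solve, adapting to a new episode costs one matrix inversion rather than an iterative optimization loop.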

Meta-Learning with Latent Embedding Optimization

deepmind/leo ICLR 2019

We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.
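The idea of adapting in a low-dimensional latent space instead of weight space can be illustrated with a linear "decoder" and a toy loss; gradients with respect to the latent code flow through the decoder by the chain rule. This is a hand-rolled sketch with made-up names, not the repository's model (which learns the encoder/decoder and uses a stochastic latent).

```python
import numpy as np

rng = np.random.default_rng(0)
d_theta, d_z = 50, 3                   # high-dimensional weights, low-dimensional latent
D = rng.normal(size=(d_theta, d_z))    # stand-in for a learned decoder
t = D @ rng.normal(size=d_z)           # toy target weights (reachable from the latent space)

def loss(z):
    theta = D @ z                      # decode the latent code into model weights
    return 0.5 * np.sum((theta - t) ** 2)

z = np.zeros(d_z)
lr = 0.01
for _ in range(100):                   # gradient-based adaptation in latent space
    grad_z = D.T @ (D @ z - t)         # chain rule through the decoder
    z -= lr * grad_z

print(loss(z) < 1e-3)                  # a 3-dim search recovers 50-dim weights
```

Optimizing 3 latent coordinates instead of 50 weights is what makes few-shot gradient adaptation tractable when the underlying model is large.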

Learning to Design RNA

automl/learna ICLR 2019

Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.