Meta-Learning
1186 papers with code • 4 benchmarks • 19 datasets
Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve their own ability to learn from experience across tasks.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
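The "learning to learn" idea is easiest to see in gradient-based meta-learning in the style of MAML: train an initialisation from which a single gradient step adapts well to a new task. Below is a minimal numpy sketch on a made-up family of 1-D regression tasks; it uses the first-order approximation (FOMAML) rather than MAML's full second-order update, and the task family, learning rates, and names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # squared error and its gradient for a scalar linear model y_hat = w * x
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def sample_task():
    # hypothetical task family: regress y = a * x for a random slope a
    a = rng.uniform(0.5, 2.0)
    def draw(n=10):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x
    return draw

w = 0.0                    # meta-learned initialisation
alpha, beta = 0.1, 0.01    # inner / outer learning rates
for step in range(500):
    draw = sample_task()
    x_s, y_s = draw()                       # support set: adapt on it
    x_q, y_q = draw()                       # query set: evaluate the adapted model
    _, g_s = loss_grad(w, x_s, y_s)
    w_adapt = w - alpha * g_s               # one inner-loop gradient step
    _, g_q = loss_grad(w_adapt, x_q, y_q)
    w -= beta * g_q                         # first-order (FOMAML-style) outer update
```

After meta-training, a single inner-loop step from `w` already moves the model closer to a new task's slope than using `w` directly.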
Libraries
Use these libraries to find Meta-Learning models and implementations
Most implemented papers
Meta-Learning Representations for Continual Learning
We show that it is possible to learn naturally sparse representations that are more effective for online updating.
Meta-Learning with Implicit Gradients
By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer.
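The sketch below illustrates the implicit-gradient idea on a 1-D toy problem (all constants are made up): the inner problem minimises a quadratic loss plus a proximal term lam/2 * (phi - theta)^2, and the meta-gradient is computed from the inner solution alone via dphi*/dtheta = 1 / (1 + H/lam), without differentiating through the optimiser's path. A finite-difference check through the full inner optimisation agrees.

```python
import numpy as np

a, b, lam = 3.0, 2.0, 1.0      # toy inner loss 0.5*a*(phi-b)^2, proximal strength lam

def solve_inner(theta, lr=0.1, steps=200):
    # gradient descent on the regularised inner objective
    # 0.5*a*(phi-b)^2 + lam/2 * (phi - theta)^2
    phi = theta
    for _ in range(steps):
        phi -= lr * (a * (phi - b) + lam * (phi - theta))
    return phi

def outer_grad(phi):
    return 2.0 * (phi - 5.0)   # gradient of a toy outer (test) loss (phi - 5)^2

theta = 0.0
phi_star = solve_inner(theta)
# implicit meta-gradient: only phi_star and the inner Hessian (= a) are needed,
# not the optimisation path, since dphi*/dtheta = 1 / (1 + a / lam)
meta_grad = outer_grad(phi_star) / (1.0 + a / lam)

# finite-difference check that does differentiate through the whole inner loop
eps = 1e-4
fd = ((solve_inner(theta + eps) - 5.0) ** 2
      - (solve_inner(theta - eps) - 5.0) ** 2) / (2 * eps)
```

The two gradients match, which is the point of the implicit approach: memory and compute for the meta-gradient no longer depend on how many inner steps were taken.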
SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
Few-shot learners aim to recognize new object classes based on a small number of labeled training examples.
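SimpleShot's recipe is feature post-processing plus nearest-centroid classification (nearest neighbour to class means). Here is a numpy sketch of that classification step, assuming features already extracted by a pretrained backbone; the CL2N transform (centre by the base-class mean feature, then L2-normalise) follows the paper, while the toy episode data is invented for illustration.

```python
import numpy as np

def cl2n(feats, base_mean):
    # CL2N: centre by the mean feature of the base classes, then L2-normalise
    z = feats - base_mean
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def simpleshot_predict(support, support_labels, query, base_mean):
    s, q = cl2n(support, base_mean), cl2n(query, base_mean)
    classes = np.unique(support_labels)
    # one centroid per class, averaged over the few labelled support examples
    centroids = np.stack([s[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# toy 2-way 2-shot episode with hand-made 2-D "features"
support = np.array([[10.0, 0.0], [11.0, 0.5], [0.0, 10.0], [0.5, 11.0]])
labels = np.array([0, 0, 1, 1])
query = np.array([[9.0, 1.0], [1.0, 9.0]])
pred = simpleshot_predict(support, labels, query, base_mean=np.zeros(2))
```

Despite having no episodic meta-training at all, this simple baseline is competitive with far more elaborate few-shot methods, which is the paper's argument.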
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second
We present TabPFN, a trained Transformer that can do supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning and is competitive with state-of-the-art classification methods.
Learning to Generalize: Meta-Learning for Domain Generalization
We propose a novel meta-learning method for domain generalization.
DiCE: The Infinitely Differentiable Monte-Carlo Estimator
Lastly, to match the first-order gradient under differentiation, the surrogate-loss (SL) approach treats part of the cost as a fixed sample, which we show leads to missing and wrong terms in estimators of higher-order derivatives.
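DiCE repairs this with its "MagicBox" operator, exp(x - stop_gradient(x)), which evaluates to exactly 1 but reproduces the score-function gradient (and, applied repeatedly, correct higher-order terms) under automatic differentiation. The toy below hedges on tooling: instead of a real autodiff framework it uses a hand-rolled forward-mode dual number, and the log-probability and cost values are invented.

```python
import math

class Dual:
    # minimal forward-mode autodiff number: a value plus its derivative w.r.t. theta
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def stop_gradient(x):
    return Dual(x.val, 0.0)            # value passes through, derivative is cut

def exp(x):
    e = math.exp(x.val)
    return Dual(e, e * x.dot)

def magic_box(x):
    # DiCE operator: its value is exactly 1, its derivative is that of x itself
    return exp(x - stop_gradient(x))

theta = Dual(0.5, 1.0)                 # differentiate everything w.r.t. theta
log_prob = 2.0 * theta                 # toy log-probability of a sampled action
cost = 3.0                             # sampled cost, constant w.r.t. theta
surrogate = magic_box(log_prob) * cost
# surrogate.val equals cost (the estimator's value is unchanged), while
# surrogate.dot equals cost * d(log_prob)/d(theta), the score-function gradient
```

In a real framework the same operator is one line, e.g. using `detach()` in PyTorch or `tf.stop_gradient` in TensorFlow.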
Differentiable plasticity: training plastic neural networks with backpropagation
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?
Meta-learning with differentiable closed-form solvers
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
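For ridge regression, the inner "learning" step has a closed form, so the meta-learner can backpropagate through it directly. Below is a numpy sketch of that base learner only; the actual method (e.g. R2-D2) applies this solve on top of a learned feature extractor, and the episode data here is invented for illustration.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    # closed-form ridge solution W = (X^T X + lam I)^(-1) X^T Y.
    # Every operation is differentiable, so in the meta-learning setting
    # gradients can flow through this solve back into the features X.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# tiny episode: fit the base learner on support features, predict on a query
X_s = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y_s = X_s @ np.array([[2.0], [3.0]])        # ground-truth linear map
W = ridge_fit(X_s, Y_s, lam=1e-6)
pred = np.array([[2.0, 2.0]]) @ W           # query prediction, approx. 2*2 + 2*3
```

Because adaptation is a single linear solve rather than an iterative inner loop, each episode is fast and the meta-gradient is exact.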
Meta-Learning with Latent Embedding Optimization
We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.
Learning to Design RNA
Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation.