Meta-Learning
1186 papers with code • 4 benchmarks • 19 datasets
Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve how they learn across tasks.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
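To make "learning to learn" concrete, here is a minimal first-order sketch in the spirit of MAML (the credited paper): an inner loop adapts to each sampled task, and an outer loop moves a shared initialization so that adaptation works well. The 1-D regression tasks, learning rates, and first-order approximation are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Squared error of the model y ~ w * x, with gradient w.r.t. w."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.01):
    """One meta-update: adapt per task (inner loop), then move the shared init."""
    meta_grad = 0.0
    for (x, y) in tasks:
        _, g = loss_and_grad(w, x, y)
        w_adapted = w - inner_lr * g              # inner (task) adaptation
        # First-order approximation: reuse the adapted gradient directly.
        _, g_adapted = loss_and_grad(w_adapted, x, y)
        meta_grad += g_adapted
    return w - outer_lr * meta_grad / len(tasks)

def sample_task():
    """Each task: fit a different slope a, drawn per task."""
    a = rng.uniform(-2, 2)
    x = rng.normal(size=16)
    return x, a * x

w = 0.0                                           # shared initialization
for _ in range(200):
    w = maml_step(w, [sample_task() for _ in range(4)])
```

After meta-training, a single inner gradient step from `w` should already reduce the loss on a freshly sampled task; that fast adaptation is the point of the method.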
Libraries
Use these libraries to find Meta-Learning models and implementations.
Latest papers
Learning to Defer to a Population: A Meta-Learning Approach
The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert.
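A toy illustration of the deferral idea (a fixed confidence threshold, not the paper's learned, population-aware deferral policy): the system predicts only when its confidence is high enough and otherwise routes the decision to a human expert. The threshold value is an illustrative assumption.

```python
import numpy as np

def defer_or_predict(probs, threshold=0.8):
    """Return ('predict', class_index) when the top softmax probability
    clears the threshold, else ('defer', None) to hand off to an expert."""
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] >= threshold:
        return ("predict", top)
    return ("defer", None)
```

In the L2D framework proper, the deferral rule is learned jointly with the classifier (and, in this paper, meta-learned so it generalizes across a population of experts) rather than hand-set as above.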
On Latency Predictors for Neural Architecture Search
We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes.
Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning
In this paper, we build upon loss function learning, an emergent meta-learning paradigm that aims to learn loss functions that significantly improve the performance of the models trained under them.
VRP-SAM: SAM with Visual Reference Prompt
In this paper, we propose a novel Visual Reference Prompt (VRP) encoder that empowers the Segment Anything Model (SAM) to utilize annotated reference images as prompts for segmentation, creating the VRP-SAM model.
Reinforced In-Context Black-Box Optimization
In this paper, we propose RIBBO, a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
Discovering Temporally-Aware Reinforcement Learning Algorithms
We propose a simple augmentation to two existing objective discovery approaches that allows the discovered algorithm to dynamically update its objective function throughout the agent's training procedure, resulting in expressive schedules and increased generalization across different training horizons.
Is Mamba Capable of In-Context Learning?
State of the art foundation models such as GPT-4 perform surprisingly well at in-context learning (ICL), a variant of meta-learning concerning the learned ability to solve tasks during a neural network forward pass, exploiting contextual information provided as input to the model.
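As a toy stand-in for the in-context learning described above (purely illustrative: real ICL emerges in trained sequence models such as GPT-4 or Mamba, and the 1-nearest-neighbor rule here is an assumption for demonstration), the key property is that the "learning" happens entirely at inference time from the context, with no weight updates.

```python
def icl_predict(context, query):
    """Solve a task from in-context examples at inference time.
    context: list of (input, label) pairs supplied as the 'prompt';
    here the rule is simply 1-nearest-neighbor over the inputs."""
    best = min(context, key=lambda pair: abs(pair[0] - query))
    return best[1]

# The same function handles a brand-new task given only its context pairs,
# mirroring how an ICL-capable model adapts within a single forward pass.
```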
Predicting Configuration Performance in Multiple Environments with Sequential Meta-learning
Comparing against 15 state-of-the-art models across nine systems, our extensive experimental results demonstrate that SeMPL performs considerably better on 89% of the systems, with up to 99% accuracy improvement, while being data-efficient with up to a 3.86x speedup.
Symbol: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning
Recent Meta-learning for Black-Box Optimization (MetaBBO) methods harness neural networks to meta-learn configurations of traditional black-box optimizers.
Sample Weight Estimation Using Meta-Updates for Online Continual Learning
This is done by first estimating sample weight parameters for each sample in the mini-batch, then, updating the model with the adapted sample weights.
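The two-step procedure above can be sketched as follows. This is a hedged illustration on linear regression: per-sample weights are estimated by how well each sample's gradient aligns with a clean validation gradient (a common meta-reweighting heuristic, not necessarily the paper's exact estimator), and the model is then updated with the weighted loss.

```python
import numpy as np

def grad_per_sample(w, X, y):
    """Per-sample gradients of squared error, shape (n, d)."""
    err = X @ w - y                                   # residuals, shape (n,)
    return 2 * err[:, None] * X

def meta_weighted_step(w, X, y, X_val, y_val, lr=0.01):
    """Step 1: estimate sample weights via gradient alignment with a
    validation batch. Step 2: update the model with the adapted weights."""
    g = grad_per_sample(w, X, y)                      # mini-batch grads
    g_val = grad_per_sample(w, X_val, y_val).mean(0)  # validation grad
    align = g @ g_val                                 # alignment scores
    wts = np.clip(align, 0, None)                     # keep helpful samples
    if wts.sum() > 0:
        wts /= wts.sum()                              # normalize weights
    return w - lr * (wts[:, None] * g).sum(0)         # weighted update
```

Samples whose gradients point against the validation gradient (e.g. mislabeled examples) receive weight zero, so a single weighted step already moves the model in a direction that helps held-out performance.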