Meta-Learning

1186 papers with code • 4 benchmarks • 19 datasets

Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve how they learn from experience across tasks.

(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
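Since the image credit above refers to MAML, the sketch below shows a minimal MAML-style inner/outer update to make "learning to learn" concrete. The toy model, task sampler, and learning rates are illustrative assumptions, not drawn from any of the papers listed on this page.

```python
# Minimal MAML-style meta-learning sketch (hypothetical toy task, not from any listed paper).
# Inner loop: adapt to a task with one gradient step on its support set.
# Outer loop: update the shared initialization with the query-set loss after adaptation.
import torch
import torch.nn as nn

model = nn.Linear(1, 1)                                   # shared initialization (meta-parameters)
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-3)
inner_lr = 0.01
loss_fn = nn.MSELoss()

def sample_task():
    """Toy linear regression task y = a * x, with a different slope a per task."""
    a = torch.rand(1) * 4 - 2
    x_s, x_q = torch.randn(10, 1), torch.randn(10, 1)
    return (x_s, a * x_s), (x_q, a * x_q)

for step in range(100):
    (x_s, y_s), (x_q, y_q) = sample_task()
    # Inner step: build adapted weights without modifying the meta-parameters in place.
    support_loss = loss_fn(model(x_s), y_s)
    params = dict(model.named_parameters())
    grads = torch.autograd.grad(support_loss, list(params.values()), create_graph=True)
    adapted = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Outer step: the query loss through the adapted weights updates the initialization.
    query_loss = loss_fn(torch.func.functional_call(model, adapted, x_q), y_q)
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
```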


Learning to Defer to a Population: A Meta-Learning Approach

dvtailor/meta-l2d 5 Mar 2024

The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert.

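A minimal sketch of the learning-to-defer idea, assuming the common formulation in which deferral is modeled as an extra output class; the network, dimensions, and argmax deferral rule here are illustrative, not the method of this paper.

```python
# Minimal learning-to-defer sketch (illustrative only, not this paper's method):
# the network predicts class scores plus one extra "defer" score; whenever the defer
# score wins the argmax, the input is routed to the human expert instead.
import torch
import torch.nn as nn

n_classes = 5
net = nn.Linear(16, n_classes + 1)          # last logit acts as the deferral option

def predict_or_defer(x):
    logits = net(x)
    choice = logits.argmax(dim=-1)
    deferred = choice == n_classes          # True -> hand this input to the expert
    return choice, deferred

x = torch.randn(4, 16)
predictions, deferred_mask = predict_or_defer(x)
```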

On Latency Predictors for Neural Architecture Search

abdelfattah-lab/nasflat_latency 4 Mar 2024

We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes.

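A rough sketch of the kind of predictor the abstract describes: per-operation encodings of a candidate network are pooled and combined with a learned hardware-device embedding to regress latency. All names, dimensions, and the pooling choice are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical latency predictor: per-operation encodings of a candidate network are
# pooled and concatenated with a learned hardware-device embedding, then regressed to
# a single latency value. Dimensions, pooling, and names are illustrative assumptions.
import torch
import torch.nn as nn

class LatencyPredictor(nn.Module):
    def __init__(self, op_dim=32, n_devices=8, dev_dim=16):
        super().__init__()
        self.device_emb = nn.Embedding(n_devices, dev_dim)
        self.head = nn.Sequential(nn.Linear(op_dim + dev_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, op_encodings, device_id):
        # op_encodings: (num_ops, op_dim) encoding of the candidate network's operations
        net_repr = op_encodings.mean(dim=0)              # simple mean pooling over operations
        dev_repr = self.device_emb(device_id).squeeze(0)
        return self.head(torch.cat([net_repr, dev_repr]))

predictor = LatencyPredictor()
latency = predictor(torch.randn(12, 32), torch.tensor([3]))   # predicted latency on device 3
```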

Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning

decadz/evolved-model-agnostic-loss 1 Mar 2024

In this paper, we build on loss function learning, an emergent meta-learning paradigm that aims to learn loss functions which significantly improve the performance of models trained under them.

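For intuition, the sketch below shows a base model being trained under a learnable loss. The paper itself represents losses as symbolic expressions evolved by genetic programming with local search; the small neural loss here is only a stand-in to show where a learned loss plugs into training.

```python
# Illustrative parameterized loss. The paper evolves symbolic loss expressions with
# genetic programming and local search; here a small network over (prediction, error)
# stands in as the learnable loss, just to show where it plugs into base-model training.
import torch
import torch.nn as nn

class LearnedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, y_pred, y_true):
        feats = torch.stack([y_pred, y_pred - y_true], dim=-1)
        return self.net(feats).mean()                    # scalar loss for the base model's update

loss_fn = LearnedLoss()        # its parameters are tuned by the outer meta-level search
model = nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss_fn(model(x), y).backward()                          # base model trained under the learned loss
```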

VRP-SAM: SAM with Visual Reference Prompt

syp2ysy/vrp-sam 27 Feb 2024

In this paper, we propose a novel Visual Reference Prompt (VRP) encoder that empowers the Segment Anything Model (SAM) to utilize annotated reference images as prompts for segmentation, creating the VRP-SAM model.


Reinforced In-Context Black-Box Optimization

songlei00/ribbo 27 Feb 2024

In this paper, we propose RIBBO, a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.


Discovering Temporally-Aware Reinforcement Learning Algorithms

EmptyJackson/groove 8 Feb 2024

We propose a simple augmentation to two existing objective discovery approaches that allows the discovered algorithm to dynamically update its objective function throughout the agent's training procedure, resulting in expressive schedules and increased generalization across different training horizons.

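A hedged sketch of what "temporally aware" means here: the meta-learned objective also receives normalized training progress, so the objective it imposes can change over the course of the run. The network, inputs, and shapes are placeholders, not the discovered algorithms from the paper.

```python
# Illustrative temporally-aware learned objective: besides the usual policy-gradient
# inputs, the objective network receives normalized training progress, so the objective
# it imposes can change over the training run. Inputs and shapes are placeholders.
import torch
import torch.nn as nn

objective_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

def learned_objective(advantage, log_prob, step, total_steps):
    progress = torch.full_like(advantage, step / total_steps)     # in [0, 1]
    feats = torch.stack([advantage, log_prob, progress], dim=-1)
    return objective_net(feats).mean()                   # scalar objective for the agent update

adv, logp = torch.randn(64), torch.randn(64)
loss = learned_objective(adv, logp, step=5_000, total_steps=100_000)
```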

Is Mamba Capable of In-Context Learning?

automl/is_mamba_capable_of_icl 5 Feb 2024

State-of-the-art foundation models such as GPT-4 perform surprisingly well at in-context learning (ICL), a variant of meta-learning in which a network learns to solve tasks during a single forward pass by exploiting contextual information provided as input to the model.

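A tiny illustration of in-context learning as described above: support examples are serialized into the model's input, and the task must be inferred in a single forward pass with no weight updates. The prompt format below is an arbitrary assumption, unrelated to Mamba or GPT-4 specifics.

```python
# Toy in-context learning setup (not specific to Mamba or GPT-4): support examples are
# serialized into the input sequence, and the task must be inferred in a single forward
# pass with no weight updates. The prompt format is an arbitrary assumption.
def build_icl_prompt(support_pairs, query):
    lines = [f"Input: {x} -> Output: {y}" for x, y in support_pairs]
    lines.append(f"Input: {query} -> Output:")
    return "\n".join(lines)

prompt = build_icl_prompt([("2 + 2", "4"), ("3 + 5", "8")], "7 + 1")
# `prompt` is fed to a sequence model; the answer is read off its continuation.
```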

Predicting Configuration Performance in Multiple Environments with Sequential Meta-learning

ideas-labo/sempl 5 Feb 2024

Comparing against 15 state-of-the-art models across nine systems, our extensive experimental results demonstrate that SeMPL performs considerably better on 89% of the systems, with up to 99% accuracy improvement, while being data-efficient, leading to a maximum speedup of 3.86x.


Symbol: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning

gmc-drl/symbol 4 Feb 2024

Recent Meta-learning for Black-Box Optimization (MetaBBO) methods harness neural networks to meta-learn configurations of traditional black-box optimizers.


Sample Weight Estimation Using Meta-Updates for Online Continual Learning

hamedhemati/omsi 29 Jan 2024

This is done by first estimating a weight parameter for each sample in the mini-batch and then updating the model with the adapted sample weights.

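A sketch of the general meta-reweighting recipe the sentence describes, assuming the common approach of tuning per-sample weights with one meta-gradient step against a held-out batch; this is not the paper's exact OMSI procedure, and all names and sizes are illustrative.

```python
# Generic meta-reweighting sketch (not the paper's exact OMSI procedure): per-sample
# weights are tuned with one meta-gradient step so that the weighted update helps on a
# held-out meta batch, and the model is then updated with the adapted sample weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))            # incoming mini-batch
x_meta, y_meta = torch.randn(16, 8), torch.randint(0, 3, (16,))  # held-out meta batch

# 1) Estimate per-sample weights via a differentiable one-step lookahead.
w = torch.zeros(16, requires_grad=True)
per_sample = F.cross_entropy(model(x), y, reduction="none")
params = list(model.parameters())                                # [weight, bias]
grads = torch.autograd.grad((w * per_sample).mean(), params, create_graph=True)
weight_adapted, bias_adapted = [p - 0.1 * g for p, g in zip(params, grads)]
meta_loss = F.cross_entropy(F.linear(x_meta, weight_adapted, bias_adapted), y_meta)
w_grad, = torch.autograd.grad(meta_loss, w)
weights = torch.clamp(-w_grad, min=0)                            # favour helpful samples
weights = weights / (weights.sum() + 1e-8)

# 2) Update the model with the adapted sample weights.
opt.zero_grad()
(weights.detach() * F.cross_entropy(model(x), y, reduction="none")).sum().backward()
opt.step()
```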