Meta-Learning
1194 papers with code • 4 benchmarks • 19 datasets
Meta-learning is a methodology concerned with "learning to learn", i.e., with machine learning algorithms that improve their own learning process from experience across related tasks.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
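The credited paper (MAML) illustrates the "learning to learn" idea: an inner loop adapts parameters to each task, and an outer loop updates the shared initialization so that adaptation works well across tasks. Below is a minimal first-order sketch on toy 1-D linear regression tasks; the function names and the toy task setup are illustrative, not from the paper.

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of MSE loss for a linear model y_hat = w * x."""
    return 2.0 * np.mean(x * (w * x - y))

def maml_first_order(tasks, w0=0.0, alpha=0.1, beta=0.05, steps=200):
    """First-order MAML sketch.

    Inner loop: one gradient step adapts w to each task.
    Outer loop: the initialization is nudged using the gradient
    evaluated at the adapted parameters (first-order approximation).
    """
    w = w0
    for _ in range(steps):
        meta_grad = 0.0
        for x, y in tasks:
            w_task = w - alpha * loss_grad(w, x, y)   # inner adaptation
            meta_grad += loss_grad(w_task, x, y)      # outer gradient (first-order)
        w -= beta * meta_grad / len(tasks)
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
# Tasks share structure: slopes cluster around 2.0, so a good
# initialization should land near w = 2.0.
tasks = [(x, s * x) for s in (1.8, 2.0, 2.2)]
w_init = maml_first_order(tasks)
```

After meta-training, `w_init` sits near the center of the task distribution, so a single inner gradient step moves it close to any individual task's optimum.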
Libraries
Use these libraries to find Meta-Learning models and implementations.
Latest papers
Window Stacking Meta-Models for Clinical EEG Classification
Windowing is a common technique in EEG machine learning classification and other time series tasks.
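Windowing splits a long recording into fixed-length, possibly overlapping segments so a classifier can score each segment independently; segment-level predictions can then be pooled into a recording-level label. A minimal sketch (the helper `make_windows` is hypothetical, not from the paper):

```python
import numpy as np

def make_windows(signal, win_len, step):
    """Slice a 1-D time series into overlapping fixed-length windows.

    Returns an array of shape (num_windows, win_len). Each row is one
    window; a classifier scores rows, and the per-window predictions
    are pooled (e.g. majority vote) into one label for the recording.
    """
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

eeg = np.arange(10.0)  # stand-in for one EEG channel
windows = make_windows(eeg, win_len=4, step=2)
# windows has 4 rows, starting at samples 0, 2, 4, 6
```

With `step < win_len` the windows overlap, which increases the number of training examples at the cost of correlated samples.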
Secrets of RLHF in Large Language Models Part II: Reward Modeling
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data.
Selective-Memory Meta-Learning with Environment Representations for Sound Event Localization and Detection
In addition, we introduce environment representations to characterize different acoustic settings, enhancing the adaptability of our adaptation approach to various environments.
Adaptive FSS: A Novel Few-Shot Segmentation Framework via Prototype Enhancement
In this paper, we propose a novel framework based on the adapter mechanism, namely Adaptive FSS, which can efficiently adapt the existing FSS model to the novel classes.
Meta-Learning-Based Adaptive Stability Certificates for Dynamical Systems
This paper addresses the problem of Neural Network (NN) based adaptive stability certification in a dynamical system.
Personalized Federated Learning with Contextual Modulation and Meta-Learning
These findings highlight the potential of incorporating contextual information and meta-learning techniques into federated learning, paving the way for advancements in distributed machine learning paradigms.
Discovering modular solutions that generalize compositionally
This allows us to relate the problem of compositional generalization to that of identification of the underlying modules.
AutoXPCR: Automated Multi-Objective Model Selection for Time Series Forecasting
Our method clearly outperforms other model selection approaches: on average, it requires only 20% of the computation cost to recommend models with 90% of the best-possible quality.
Meta-Learning with Versatile Loss Geometries for Fast Adaptation Using Mirror Descent
By utilizing task-invariant prior knowledge extracted from related tasks, meta-learning provides a principled framework for learning a new task, especially when data records are limited.
XLand-MiniGrid: Scalable Meta-Reinforcement Learning Environments in JAX
Inspired by the diversity and depth of XLand and the simplicity and minimalism of MiniGrid, we present XLand-MiniGrid, a suite of tools and grid-world environments for meta-reinforcement learning research.