Meta-learning is a methodology concerned with "learning to learn" machine learning algorithms.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
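The "learning to learn" idea can be made concrete with a toy MAML-style sketch. This is a minimal first-order illustration on a hypothetical family of 1-D linear regression tasks (all task parameters and learning rates here are made up for the example), not the full second-order algorithm from the credited paper:

```python
import numpy as np

# Toy meta-learning sketch (first-order MAML flavor, assumed setup):
# each task t is linear regression y = a_t * x. We meta-learn an
# initialization w0 such that ONE inner gradient step adapts it
# well to a freshly sampled task.

rng = np.random.default_rng(0)
inner_lr, outer_lr = 0.1, 0.01
w0 = 0.0  # the meta-learned initialization

def loss_and_grad(w, x, y):
    """Mean-squared error of y_hat = w * x, and its gradient in w."""
    pred = w * x
    loss = np.mean((pred - y) ** 2)
    grad = np.mean(2.0 * (pred - y) * x)
    return loss, grad

for _ in range(500):                      # outer (meta) loop
    meta_grad = 0.0
    for _ in range(4):                    # small batch of tasks
        a = rng.uniform(-2.0, 2.0)        # task parameter (slope)
        x = rng.uniform(-1.0, 1.0, size=10)
        y = a * x
        # inner step: adapt w0 on this task's support set
        _, g = loss_and_grad(w0, x, y)
        w_adapted = w0 - inner_lr * g
        # first-order outer gradient, evaluated at the adapted weights
        _, g2 = loss_and_grad(w_adapted, x, y)
        meta_grad += g2
    w0 -= outer_lr * meta_grad / 4

# After meta-training, a single gradient step from w0 should
# noticeably reduce the loss on a brand-new task.
```

The point of the sketch is the two nested loops: the inner loop adapts to one task, while the outer loop updates the shared initialization so that adaptation itself becomes fast, which is the sense in which the algorithm "learns to learn".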
We introduce an approach to multilingual speech synthesis which uses the meta-learning concept of contextual parameter generation and produces natural-sounding multilingual speech using more languages and less training data than previous approaches.
In this paper, we tackle the problems of few-shot object detection and few-shot viewpoint estimation.
Ranked #1 on Few-Shot Object Detection on MS-COCO (10-shot)
On CIFAR-10 we match the baseline performance and demonstrate for the first time that learning rate, momentum, and weight decay schedules can be learned with gradients on a dataset of this size.
Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance.
However, most meta-learning-based recommendation approaches adopt model-agnostic meta-learning for parameter initialization, where the globally shared parameters may lead the model into a local optimum for some users.
Many successful deep learning architectures are equivariant to certain transformations in order to conserve parameters and improve generalization: most famously, convolution layers are equivariant to shifts of the input.
To more effectively generalize to new relations, in this paper we study the relationships between different relations and propose to leverage a global relation graph.
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.