Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models

Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering, and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach to sampling from the latent space that generates objects grouped into synthetic classes, which form the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches.
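As a rough illustration of the idea described in the abstract, the sketch below builds one synthetic N-way, K-shot task by drawing an anchor latent vector per class and interpolating it toward fresh random latents before decoding with a pretrained generator. The function name, the interpolation coefficient `alpha`, and the generator interface are assumptions for illustration only, not necessarily the exact interpolation strategy used in LASIUM.

```python
import torch

def sample_synthetic_task(generator, latent_dim, n_way=5, k_shot=1,
                          q_queries=5, alpha=0.4, device="cpu"):
    """Sketch: build one synthetic N-way meta-task via latent-space interpolation.

    One anchor latent vector is drawn per synthetic class; in-class variants are
    obtained by interpolating the anchor toward fresh random latents with a small
    coefficient `alpha`, then decoding with a pretrained, frozen generator.
    All names and default values here are illustrative assumptions.
    """
    samples_per_class = k_shot + q_queries
    anchors = torch.randn(n_way, latent_dim, device=device)  # one anchor per class
    per_class_images = []
    with torch.no_grad():
        for c in range(n_way):
            targets = torch.randn(samples_per_class, latent_dim, device=device)
            # A small alpha keeps the decoded images close to the anchor, so they
            # read as variations of the same synthetic class.
            latents = (1.0 - alpha) * anchors[c] + alpha * targets
            per_class_images.append(generator(latents))  # assumed: z -> image batch
    images = torch.stack(per_class_images)                # (n_way, K+Q, C, H, W)
    support, query = images[:, :k_shot], images[:, k_shot:]
    return support, query  # training / validation splits of the meta-task
```

Keeping the interpolation coefficient small is the key design choice in this sketch: it trades off in-class diversity against class consistency, so that all images decoded for one anchor can plausibly be treated as a single synthetic class.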

Task                                        | Dataset                       | Model  | Metric   | Value | Global Rank
Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot)  | LASIUM | Accuracy | 40.05 | 25
Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot)  | LASIUM | Accuracy | 54.56 | 25
