Adaptive Subspaces for Few-Shot Learning

Object recognition requires a generalization capability to avoid overfitting, especially when samples are extremely scarce. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly to dynamic environments and is an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from only a few samples. A subspace method is exploited as the central block of each dynamic classifier. We empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on supervised and semi-supervised few-shot classification tasks. We also develop a discriminative form that boosts accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot
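As a rough illustration of the idea described above, the sketch below shows one plausible reading of a subspace-based dynamic classifier: each class subspace is assumed to come from a truncated SVD of the mean-centered support embeddings, and a query is scored by its residual distance to each subspace. Function and variable names are illustrative and are not taken from the released code.

```python
import torch


def subspace_classifier_logits(support, query, n_way, k_shot, subspace_dim=None):
    """Score query embeddings against per-class adaptive subspaces.

    support: (n_way * k_shot, d) support embeddings, grouped by class.
    query:   (n_query, d) query embeddings.
    Returns logits of shape (n_query, n_way).
    Assumes k_shot >= 2; with a single shot the centered subspace degenerates.
    """
    d = support.size(-1)
    if subspace_dim is None:
        subspace_dim = k_shot - 1  # centering removes one degree of freedom
    support = support.view(n_way, k_shot, d)
    logits = []
    for c in range(n_way):
        class_feats = support[c]                        # (k_shot, d)
        mean = class_feats.mean(dim=0, keepdim=True)    # class prototype
        centered = class_feats - mean
        # Truncated SVD yields an orthonormal basis spanning the class subspace.
        _, _, vh = torch.linalg.svd(centered, full_matrices=False)
        basis = vh[:subspace_dim]                       # (subspace_dim, d)
        diff = query - mean                             # (n_query, d)
        proj = diff @ basis.t() @ basis                 # projection onto subspace
        dist = ((diff - proj) ** 2).sum(dim=-1)         # residual (distance to subspace)
        logits.append(-dist)                            # closer subspace -> larger logit
    return torch.stack(logits, dim=-1)                  # (n_query, n_way)
```

For example, in a 5-way 5-shot episode with 512-dimensional features, `support` would be a (25, 512) tensor ordered by class and `query` a (75, 512) tensor; a cross-entropy loss on these logits could then be backpropagated through the embedding network during episodic training.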

Few-Shot Image Classification benchmarks (model: Adaptive Subspace Network):

| Dataset | Setting | Accuracy (%) | Global Rank |
|---|---|---|---|
| CIFAR-FS | 5-way (1-shot) | 78 | #14 |
| CIFAR-FS | 5-way (5-shot) | 87.3 | #23 |
| Mini-Imagenet | 5-way (1-shot) | 67.09 | #45 |
| Mini-Imagenet | 5-way (5-shot) | 81.65 | #40 |
| Tiered ImageNet | 5-way (1-shot) | 68.44 | #37 |
| Tiered ImageNet | 5-way (5-shot) | 83.32 | #34 |
