Synthesized Classifiers for Zero-Shot Learning

CVPR 2016 · Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha

Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of "phantom" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.
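The core mechanism described above can be illustrated with a minimal sketch (not the authors' released code): assuming the phantom classes' semantic coordinates `B` and their model-space bases `V` have already been learned from the seen classes, each real class's linear classifier is synthesized as a similarity-weighted combination of the phantom bases. The variable names, the Gaussian similarity kernel, and the bandwidth `sigma` are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def synthesize_classifiers(A, B, V, sigma=1.0):
    """Sketch of classifier synthesis from phantom-class bases.

    A : (C, d_sem)  semantic embeddings of the real classes (seen or unseen)
    B : (R, d_sem)  semantic coordinates of the R phantom classes (learned)
    V : (R, d_vis)  model-space bases of the phantom classes (learned)
    sigma : similarity-kernel bandwidth (hypothetical hyperparameter)

    Returns W : (C, d_vis), one linear classifier per real class.
    """
    # Squared Euclidean distances between real and phantom semantic coordinates.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # (C, R)
    # Similarity weights, normalized over the phantom classes for each real class.
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    S /= S.sum(axis=1, keepdims=True)
    # Each real classifier is a weighted combination of the phantom bases.
    return S @ V                                           # (C, d_vis)

# At test time, an image feature x (d_vis,) from an unseen class would be
# labeled by the synthesized classifier with the highest score, e.g.:
#   W_unseen = synthesize_classifiers(A_unseen, B, V)
#   y_hat = np.argmax(W_unseen @ x)
```

In the paper, `B` and `V` are optimized jointly on the labeled seen-class data so that the synthesized seen-class classifiers are maximally discriminative; the sketch only covers the synthesis step once those quantities are given.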

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Few-Shot Image Classification | AWA - 0-Shot | Synthesised Classifier | Accuracy | 72.9% | #1 |
| Few-Shot Image Classification | CUB-200-2011 - 0-Shot | Synthesised Classifier | Top-1 Accuracy | 54.7% | #2 |
| Few-Shot Image Classification | ImageNet - 0-Shot | Synthesised Classifier | Accuracy | 1.5% | #4 |
| Few-Shot Image Classification | SUN - 0-Shot | Synthesised Classifier | Accuracy | 62.7% | #1 |

Methods


No methods listed for this paper.