LPN: Language-guided Prototypical Network for few-shot classification

4 Jul 2023  ·  Kaihui Cheng, Chule Yang, Xiao Liu, Naiyang Guan, Zhiyuan Wang

Few-shot classification aims to adapt to new tasks with only a few labeled examples. To make full use of the available data, recent methods explore suitable similarity measures between query and support images and learn stronger high-dimensional features through meta-training and pre-training strategies. However, the potential of multi-modal information remains largely unexplored, even though it could yield promising improvements for few-shot classification. In this paper, we propose a Language-guided Prototypical Network (LPN) for few-shot classification, which leverages the complementarity of the vision and language modalities via two parallel branches to improve the classifier. Concretely, to introduce the language modality into a visual task with limited samples, we use a pre-trained text encoder to extract class-level text features directly from class names, while processing images with a conventional image encoder. We then introduce a language-guided decoder that obtains a text feature for each image by aligning the class-level features with the visual features. Additionally, we combine the class-level features with the prototypes to build a refined prototypical head, which generates robust prototypes in the text branch for the subsequent similarity measurement. Furthermore, we use the class-level features to align the visual features, capturing more class-relevant visual cues. Finally, we aggregate the visual and text logits to calibrate the bias of either single modality, enhancing overall performance. Extensive experiments demonstrate the competitiveness of LPN against state-of-the-art methods on benchmark datasets.
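To make the two-branch design concrete, below is a minimal sketch of the prototype computation and logit aggregation the abstract describes. It assumes query and text features live in a shared embedding space (the role the paper assigns to its language-guided decoder); the fusion weight `alpha`, temperature `tau`, and all function names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def visual_prototypes(support_feats, support_labels, n_way):
    # Standard prototypical-network step: mean of support embeddings per class.
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_way)
    ])

def aggregated_logits(query_feats, vis_protos, text_feats, alpha=0.5, tau=10.0):
    q = F.normalize(query_feats, dim=-1)
    # Visual branch: cosine similarity of queries to visual prototypes.
    vis = tau * q @ F.normalize(vis_protos, dim=-1).T
    # Text branch: cosine similarity to class-level text features, e.g. from a
    # frozen pre-trained text encoder applied to class names (assumed to be
    # already aligned with the visual space).
    txt = tau * q @ F.normalize(text_feats, dim=-1).T
    # Late fusion of the two modality logits, mirroring the abstract's
    # calibration of either single modality's deviation.
    return alpha * vis + (1 - alpha) * txt

# Toy 5-way, 5-shot episode with 512-d features.
n_way, n_shot, dim = 5, 5, 512
support = torch.randn(n_way * n_shot, dim)
labels = torch.arange(n_way).repeat_interleave(n_shot)
queries = torch.randn(15, dim)
text = torch.randn(n_way, dim)  # stand-in for encoded class names
logits = aggregated_logits(queries, visual_prototypes(support, labels, n_way), text)
print(logits.shape)  # torch.Size([15, 5])
```

In this reading, the text branch acts as a class-level prior that is cheap to obtain (class names only) and independent of the support-set sample noise, which is what makes the fused logits more robust than either branch alone.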
