Open Vocabulary Multi-Label Classification with Dual-Modal Decoder on Aligned Visual-Textual Features

19 Aug 2022 · Shichao Xu, Yikang Li, Jenhao Hsiao, Chiuman Ho, Zhu Qi

In computer vision, multi-label recognition is an important task with many real-world applications, but classifying previously unseen labels remains a significant challenge. In this paper, we propose a novel algorithm, Aligned Dual moDality ClaSsifier (ADDS), which includes a Dual-Modal decoder (DM-decoder) with alignment between visual and textual features, for open-vocabulary multi-label classification. We also design a simple yet effective method called Pyramid-Forwarding to enhance performance on high-resolution inputs, and apply Selective Language Supervision to further improve model performance. Extensive experiments on several standard benchmarks (NUS-WIDE, ImageNet-1k, ImageNet-21k, and MS-COCO) demonstrate that our approach significantly outperforms previous methods and achieves state-of-the-art performance for open-vocabulary multi-label classification, conventional multi-label classification, and an extreme case we call single-to-multi label classification, where models trained on single-label datasets (ImageNet-1k, ImageNet-21k) are tested on multi-label ones (MS-COCO and NUS-WIDE).
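The abstract describes two mechanisms that can be sketched concretely: Pyramid-Forwarding (splitting a high-resolution image into encoder-sized crops at several scales) and a Dual-Modal decoder in which label-name text embeddings cross-attend to aligned visual features. The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes a CLIP-style encoder pair producing token features in a shared 512-d space, and the tiling scheme, module names (pyramid_forward, DualModalDecoder), and hyperparameters are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code). Assumes `encoder(crop)` is a
# ViT-style image encoder returning a (B, N, D) token sequence aligned to the
# text-embedding space; label_emb are text embeddings of label names.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pyramid_forward(image, encoder, base=224, levels=2):
    """Tile a high-resolution image into encoder-sized crops at several scales
    and concatenate the resulting token sequences (rough stand-in for
    Pyramid-Forwarding)."""
    tokens = []
    for lvl in range(levels):
        side = base * (2 ** lvl)                        # 224, 448, ...
        resized = F.interpolate(image, size=(side, side),
                                mode="bilinear", align_corners=False)
        # split into non-overlapping base x base crops
        crops = resized.unfold(2, base, base).unfold(3, base, base)
        crops = crops.reshape(image.size(0), image.size(1), -1, base, base)
        for i in range(crops.size(2)):
            tokens.append(encoder(crops[:, :, i]))      # assumed (B, N, D) per crop
    return torch.cat(tokens, dim=1)                     # (B, N_total, D)


class DualModalDecoder(nn.Module):
    """Label-name (text) embeddings query the visual tokens via cross-attention;
    each refined label embedding is scored to give a per-label logit."""
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.score = nn.Linear(dim, 1)

    def forward(self, label_emb, visual_tokens):
        # label_emb:     (B, L, D) text embeddings of (possibly unseen) label names
        # visual_tokens: (B, N, D) visual features aligned to the text space
        refined = self.decoder(tgt=label_emb, memory=visual_tokens)
        return self.score(refined).squeeze(-1)          # (B, L) per-label logits
```

In this reading, open-vocabulary inference amounts to encoding new label names with the text encoder and passing them in as label_emb, without retraining the decoder; how closely this matches the paper's exact architecture would need to be checked against the full text.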

Datasets

NUS-WIDE, MS-COCO, ImageNet-1k, ImageNet-21k

Results

Task                           | Dataset                | Model                             | Metric | Value | Rank
Multi-label zero-shot learning | ImageNet-1k to MS-COCO | ADDS                              | mAP    | 67.10 | #1
Multi-Label Classification    | MS-COCO                | ADDS (ViT-L-336, resolution 1344) | mAP    | 93.54 | #1
Multi-Label Classification    | MS-COCO                | ADDS (ViT-L-336, resolution 336)  | mAP    | 91.76 | #3
Multi-Label Classification    | MS-COCO                | ADDS (ViT-L-336, resolution 640)  | mAP    | 93.41 | #2
Multi-label zero-shot learning | NUS-WIDE               | ADDS (ViT-L-336, resolution 336)  | mAP    | 39.01 | #2
Multi-label zero-shot learning | NUS-WIDE               | ADDS (ViT-B-32, resolution 224)   | mAP    | 36.56 | #4
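The mAP figures above are the standard multi-label metric: average precision computed per class, then averaged over classes. A minimal sketch of that computation (illustrative, not the paper's evaluation script) using scikit-learn:

```python
# Illustrative multi-label mAP: per-class average precision, averaged over classes.
import numpy as np
from sklearn.metrics import average_precision_score


def mean_average_precision(y_true, y_score):
    # y_true:  (num_images, num_classes) binary ground-truth label matrix
    # y_score: (num_images, num_classes) predicted scores or logits
    aps = [average_precision_score(y_true[:, c], y_score[:, c])
           for c in range(y_true.shape[1])]
    return float(np.mean(aps))
```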

Methods


ADDS (Aligned Dual moDality ClaSsifier) · Dual-Modal decoder (DM-decoder) · Pyramid-Forwarding · Selective Language Supervision