Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval

In Composed Image Retrieval (CIR), a user combines a query image with text to describe their intended target. Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, the text specification, and the target image. Labeling such triplets is expensive and hinders the broad applicability of CIR. In this work, we propose to study an important task, Zero-Shot Composed Image Retrieval (ZS-CIR), whose goal is to build a CIR model without requiring labeled triplets for training. To this end, we propose a novel method, called Pic2Word, that requires only weakly labeled image-caption pairs and unlabeled image datasets for training. Unlike existing supervised CIR models, our model trained on weakly labeled or unlabeled datasets shows strong generalization across diverse ZS-CIR tasks, e.g., attribute editing, object composition, and domain conversion. Our approach outperforms several supervised CIR methods on the common CIR benchmarks CIRR and Fashion-IQ. Code will be made publicly available at https://github.com/google-research/composed_image_retrieval.
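The core idea, mapping a picture to a pseudo-word token that can be composed with text, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the real system uses frozen CLIP image/text encoders and a learned MLP mapping network, whereas here the encoders are replaced by random toy embeddings and the mapping network by a single hypothetical matrix `W_map`.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding dimension (CLIP uses e.g. 512 or 768)

# Hypothetical stand-in for the learned Pic2Word mapping network
# (an MLP in the paper; a single random matrix here).
W_map = rng.normal(size=(D, D))

def map_image_to_token(img_emb):
    # Pic2Word step: project the image embedding into the word-token
    # space, yielding a "pseudo word" usable inside a text prompt.
    return W_map @ img_emb

def compose_query(pseudo_token, text_emb):
    # Toy composition: the real system inserts the pseudo token into a
    # prompt such as "a photo of [*] that <text>" and re-encodes it with
    # the frozen text encoder. Here we simply sum and normalize.
    q = pseudo_token + text_emb
    return q / np.linalg.norm(q)

def retrieve(query, gallery):
    # Cosine-similarity nearest neighbor over candidate image embeddings.
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return int(np.argmax(g @ query))

# Toy data: one query image, one text modifier, five gallery candidates.
img_emb = rng.normal(size=D)
text_emb = rng.normal(size=D)
gallery = rng.normal(size=(5, D))

query = compose_query(map_image_to_token(img_emb), text_emb)
best = retrieve(query, gallery)
print(best)
```

Because only the mapping network is trained (against image-caption pairs) while both CLIP encoders stay frozen, no CIR triplets are ever needed; at test time the same composed query handles attribute editing, object composition, or domain conversion purely through the text prompt.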

CVPR 2023
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | CIRCO | Pic2Word | mAP@10 | 9.51 | #9 |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | CIRR | Pic2Word | R@5 | 51.70 | #13 |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | Fashion IQ | Pic2Word | (Recall@10+Recall@50)/2 | 34.20 | #11 |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | ImageNet | Pic2Word | Average Recall | 18.85 | #2 |
| Zero-shot Image Retrieval | ImageNet-R | Pic2Word | (Recall@10+Recall@50)/2 | 16.65 | #1 |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | ImageNet-R | Pic2Word | (Recall@10+Recall@50)/2 | 16.65 | #2 |
| Zero-Shot Composed Image Retrieval (ZS-CIR) | MS COCO | Pic2Word | Actions Recall@5 | 24.8 | #2 |
