VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words

ACL 2021 · Xiaopeng Lu, Tiancheng Zhao, Kyusong Lee

Text-to-image retrieval is an essential task in cross-modal information retrieval, i.e., retrieving relevant images from a large, unlabelled collection given a textual query. In this paper, we propose VisualSparta (Visual-text Sparse Transformer Matching), a novel model that delivers significant improvements in both accuracy and efficiency. VisualSparta outperforms previous state-of-the-art scalable methods on MSCOCO and Flickr30K. It also achieves substantial retrieval speed advantages: on a 1-million-image index, VisualSparta running on CPU is ~391x faster than CPU vector search and ~5.4x faster than GPU-accelerated vector search. Experiments show that this speed advantage grows even larger on bigger datasets, because VisualSparta can be implemented efficiently as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that achieves real-time search over large-scale datasets, with significant accuracy improvements over previous state-of-the-art methods.
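
The inverted-index claim is what drives the speed numbers above: because the model scores each image against a query at the level of individual query tokens, per-image token weights can be precomputed offline and stored in a standard inverted index, so query-time work reduces to posting-list lookups and additions. Below is a minimal sketch of that serving pattern; the image IDs and term weights are illustrative placeholders, not the model's actual transformer-derived outputs.

```python
from collections import defaultdict

# Hypothetical precomputed weighted bag-of-words representations: for each
# image, the model assigns a relevance weight to every vocabulary token.
# In the real system these weights come from a transformer over image
# regions; here they are stand-in values for illustration.
image_term_weights = {
    "img_001": {"dog": 2.3, "grass": 1.1, "running": 1.8},
    "img_002": {"cat": 2.7, "sofa": 1.5, "sleeping": 2.0},
}

# Build the inverted index offline: token -> list of (image_id, weight).
inverted_index = defaultdict(list)
for image_id, weights in image_term_weights.items():
    for token, weight in weights.items():
        inverted_index[token].append((image_id, weight))

def search(query_tokens, top_k=10):
    """Score images by summing precomputed weights of the query tokens.

    Query-time cost depends only on query length and posting-list sizes,
    not on running the transformer, which is why CPU search stays fast
    as the index grows.
    """
    scores = defaultdict(float)
    for token in query_tokens:
        for image_id, weight in inverted_index.get(token, []):
            scores[image_id] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(search(["dog", "running"]))  # -> [('img_001', 4.1)]
```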


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Cross-Modal Retrieval | COCO 2014 | VisualSparta | Text-to-image R@1 | 44.4 | #23 |
| Cross-Modal Retrieval | COCO 2014 | VisualSparta | Text-to-image R@5 | 72.8 | #24 |
| Cross-Modal Retrieval | COCO 2014 | VisualSparta | Text-to-image R@10 | 82.4 | #23 |
| Image Retrieval | Flickr30k | VisualSparta | R@1 | 57.4 | #4 |
| Image Retrieval | Flickr30k | VisualSparta | R@5 | 82.0 | #6 |
| Image Retrieval | Flickr30k | VisualSparta | R@10 | 88.1 | #6 |
| Image Retrieval | Flickr30k | VisualSparta | QPS | 451.4 | #1 |
| Image Retrieval | Flickr30K 1K test | VisualSparta | R@1 | 57.4 | #4 |
| Image Retrieval | Flickr30K 1K test | VisualSparta | R@5 | 82.0 | #6 |
| Image Retrieval | Flickr30K 1K test | VisualSparta | R@10 | 88.1 | #8 |
| Image Retrieval | MS COCO | VisualSparta | R@1 | 68.2 | #2 |
| Image Retrieval | MS COCO | VisualSparta | R@5 | 91.8 | #1 |
| Image Retrieval | MS COCO | VisualSparta | R@10 | 96.3 | #2 |
| Image Retrieval | MS COCO | VisualSparta | QPS | 451.4 | #1 |
