VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search

1 Jan 2021  ·  Xiaopeng Lu, Tiancheng Zhao, Kyusong Lee ·

Text-to-image retrieval is an essential task in multi-modal information retrieval, i.e., retrieving relevant images from a large, unlabelled image dataset given a textual query. In this paper, we propose VisualSparta, a novel text-to-image retrieval model that substantially improves over existing models in both accuracy and efficiency. We show that VisualSparta outperforms all previous scalable methods on MSCOCO and Flickr30K. It also offers a substantial retrieval-speed advantage: on an index of 1 million images, VisualSparta achieves over a 391x speedup compared to standard vector search. Experiments show that this speed advantage grows even larger on bigger datasets, because VisualSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that achieves real-time search over very large datasets, with significant accuracy improvements over previous state-of-the-art methods.
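The inverted-index implementation mentioned above can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's code: in VisualSparta the per-token weights for each image come from a transformer scoring image fragments against vocabulary terms, whereas the weights below are made-up values. Retrieval then reduces to a sparse dot product between the query's tokens and the stored term weights, which is exactly what an inverted index accelerates.

```python
from collections import defaultdict

def build_index(image_term_weights):
    """Build an inverted index: token -> list of (image_id, weight).

    image_term_weights: dict mapping image_id -> {token: weight}.
    In the real model these weights would be produced offline by the
    transformer; here they are toy values.
    """
    index = defaultdict(list)
    for image_id, weights in image_term_weights.items():
        for token, w in weights.items():
            index[token].append((image_id, w))
    return index

def search(index, query_tokens, top_k=3):
    """Score each image as the sum of stored weights for the query's
    tokens (a sparse dot product), and return the top_k results."""
    scores = defaultdict(float)
    for token in query_tokens:
        for image_id, w in index.get(token, []):
            scores[image_id] += w
    return sorted(scores.items(), key=lambda x: -x[1])[:top_k]

# Toy corpus: three images with hypothetical per-token weights.
images = {
    "img1": {"dog": 2.0, "grass": 1.0},
    "img2": {"cat": 1.5, "sofa": 0.5},
    "img3": {"dog": 0.5, "beach": 1.2},
}
index = build_index(images)
print(search(index, ["dog", "beach"]))  # img1 scores 2.0, img3 scores 1.7
```

Because only postings lists for the query's tokens are touched, query time scales with the number of matching postings rather than with the full collection size, which is why the speed advantage grows with larger indexes.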


Results from the Paper

Model: VisualSparta. Columns: metric, value, global rank.

Cross-Modal Retrieval on COCO 2014
  Text-to-image R@1    44.4   # 2
  Text-to-image R@5    72.8   # 2
  Text-to-image R@10   82.4   # 2

Text-Image Retrieval on Flickr30k
  recall@1    57.4   # 1
  recall@5    82.0   # 1
  recall@10   88.1   # 1
  QPS        451.4   # 1

Image Retrieval on Flickr30K 1K test
  R@1    57.4   # 2
  R@5    82.0   # 3
  R@10   88.1   # 5

Text-Image Retrieval on MSCOCO-1k
  recall@1    68.2   # 1
  recall@5    91.8   # 1
  recall@10   96.3   # 1
  QPS        451.4   # 1
