Dual-Path Convolutional Image-Text Embeddings with Instance Loss

15 Nov 2017 · Zhedong Zheng, Liang Zheng, Michael Garrett, Yi Yang, Mingliang Xu, Yi-Dong Shen

Matching images and sentences demands a fine-grained understanding of both modalities. In this paper, we propose a new system that discriminatively embeds images and text into a shared visual-textual space. Most existing works in this field apply a ranking loss to pull positive image/text pairs close and push negative pairs apart. However, directly deploying the ranking loss makes network learning difficult, since it must build the inter-modal relationship starting from two heterogeneous features. To address this problem, we propose the instance loss, which explicitly considers the intra-modal data distribution. It is based on the unsupervised assumption that each image/text group can be viewed as a distinct class, so the network can learn fine-grained granularity from every image/text group. Experiments show that the instance loss offers a better weight initialization for the ranking loss, so that more discriminative embeddings can be learned. In addition, existing works usually rely on off-the-shelf features, i.e., word2vec and fixed visual features. As a minor contribution, this paper therefore constructs an end-to-end dual-path convolutional network to learn the image and text representations; end-to-end learning allows the system to learn directly from the data and fully utilize the supervision. On two generic retrieval datasets (Flickr30k and MSCOCO), experiments demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language-based person retrieval, we improve the state of the art by a large margin. The code has been made publicly available.
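The two losses described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' released code: the function names, feature shapes, shared-classifier formulation, and margin value are assumptions chosen for clarity. The instance loss treats every image/text group as its own class and applies one shared softmax classifier to both modalities; the ranking loss is a standard bidirectional hinge loss on cosine similarity.

```python
import numpy as np

def instance_loss(img_feat, txt_feat, W, labels):
    """Instance loss (sketch): each image/text group is its own class.
    A single shared classifier W ties the two modalities together."""
    def softmax_ce(feats):
        logits = feats @ W                              # (batch, num_instances)
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        return -np.log(probs[np.arange(len(labels)), labels]).mean()
    # Both paths are classified against the same instance labels.
    return softmax_ce(img_feat) + softmax_ce(txt_feat)

def ranking_loss(img_feat, txt_feat, margin=0.2):
    """Bidirectional hinge ranking loss on cosine similarity (sketch).
    Row i of img_feat and row i of txt_feat are assumed to be a matched pair."""
    img = img_feat / np.linalg.norm(img_feat, axis=1, keepdims=True)
    txt = txt_feat / np.linalg.norm(txt_feat, axis=1, keepdims=True)
    sim = img @ txt.T                                   # (batch, batch) cosine sims
    pos = np.diag(sim)                                  # similarities of matched pairs
    # Penalize negatives that come within `margin` of the positive pair.
    cost_i2t = np.maximum(0.0, margin + sim - pos[:, None])  # image -> text
    cost_t2i = np.maximum(0.0, margin + sim - pos[None, :])  # text -> image
    np.fill_diagonal(cost_i2t, 0.0)                     # positives carry no cost
    np.fill_diagonal(cost_t2i, 0.0)
    return cost_i2t.mean() + cost_t2i.mean()
```

In a two-stage schedule such as the one the paper describes, the instance loss would first be minimized alone to initialize both paths, after which the ranking loss (optionally combined with the instance loss) refines the shared embedding.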


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Modal Retrieval | CUHK-PEDES | Dual Path | Text-to-image Medr | 2 | # 1 |
| Text based Person Retrieval | CUHK-PEDES | Dual Path | R@1 | 44.4 | # 20 |
| Text based Person Retrieval | CUHK-PEDES | Dual Path | R@5 | 66.26 | # 20 |
| Text based Person Retrieval | CUHK-PEDES | Dual Path | R@10 | 75.07 | # 21 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Image-to-text R@1 | 55.6 | # 20 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Image-to-text R@5 | 81.9 | # 20 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Image-to-text R@10 | 89.5 | # 19 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Text-to-image R@1 | 39.1 | # 23 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Text-to-image R@5 | 69.2 | # 22 |
| Cross-Modal Retrieval | Flickr30k | Dual-Path (ResNet) | Text-to-image R@10 | 80.9 | # 20 |
| Cross-Modal Retrieval | MSCOCO-1k | Dual-path CNN | Image-to-text R@1 | 41.2 | # 2 |
| Cross-Modal Retrieval | MSCOCO-1k | Dual-path CNN | Text-to-image R@1 | 25.3 | # 1 |
