Multimodal Convolutional Neural Networks for Matching Image and Sentence

ICCV 2015  ·  Lin Ma, Zhengdong Lu, Lifeng Shang, Hang Li

In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words into semantic fragments and learns the inter-modal relations between the image and the composed fragments at different levels, thus fully exploiting the matching relations between image and sentence. Experimental results on benchmark databases for bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs achieve state-of-the-art performance for bidirectional image and sentence retrieval on the Flickr30K and Microsoft COCO databases.
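The fragment-level matching described above can be sketched in a few lines. The following is a toy NumPy sketch, not the authors' implementation: the window size `k`, all dimensions, the ReLU compositions, and the scoring MLP (`W`, `b`, `Wm`, `bm`, `v`) are illustrative assumptions standing in for the paper's learned convolutional and matching layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose(words, W, b, k=3):
    """Compose sliding windows of k word vectors into semantic fragments
    (a stand-in for one convolutional composition layer)."""
    frags = []
    for i in range(len(words) - k + 1):
        win = words[i:i + k].reshape(-1)          # concatenate k word vectors
        frags.append(np.maximum(W @ win + b, 0))  # ReLU composition
    return np.stack(frags)

def match_score(img, frags, Wm, bm, v):
    """Score each (fragment, image) pair with a small MLP and
    max-pool over fragments to get one image-sentence score."""
    scores = []
    for f in frags:
        h = np.maximum(Wm @ np.concatenate([f, img]) + bm, 0)
        scores.append(float(v @ h))
    return max(scores)

# Toy dimensions: 8-d word/image features, 7-word sentence, 16-d hidden layer.
d, n_words, hidden = 8, 7, 16
img = rng.standard_normal(d)
words = rng.standard_normal((n_words, d))
W = rng.standard_normal((d, 3 * d)); b = np.zeros(d)
Wm = rng.standard_normal((hidden, 2 * d)); bm = np.zeros(hidden)
v = rng.standard_normal(hidden)

frags = compose(words, W, b)          # (5, 8): one fragment per window
score = match_score(img, frags, Wm, bm, v)
```

In the paper this composition-and-matching step is applied at several levels (word, phrase, sentence), and the per-level scores are combined in an ensemble; the sketch shows a single level only.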

Benchmark results (Image Retrieval, Flickr30K 1K test, model mCNN):

R@1:  26.2  (global rank #16)
R@5:  56.3  (global rank #14)
R@10: 69.6  (global rank #15)
