Cross-Modal Self-Attention Network for Referring Image Segmentation

CVPR 2019 · Linwei Ye, Mrigank Rochan, Zhi Liu, Yang Wang

We consider the problem of referring image segmentation. Given an input image and a natural language expression, the goal is to segment the object referred to by the expression in the image. Existing works in this area represent the language expression and the input image separately and do not sufficiently capture long-range correlations between the two modalities. In this paper, we propose a cross-modal self-attention (CMSA) module that effectively captures the long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and important regions in the input image. In addition, we propose a gated multi-level fusion module to selectively integrate self-attentive cross-modal features from different levels of the image backbone. This module controls the information flow of features at different levels. We validate the proposed approach on four evaluation datasets. Our proposed approach consistently outperforms existing state-of-the-art methods.
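To make the two components described above more concrete, the following is a minimal PyTorch-style sketch of a cross-modal self-attention module and a gated multi-level fusion step. It is not the authors' released implementation; all module names, feature dimensions, the 8-dimensional coordinate feature, and the word-pooling step are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code); shapes and dims are assumptions.
import torch
import torch.nn as nn


class CrossModalSelfAttention(nn.Module):
    """Self-attention over joint visual-linguistic features.

    Every spatial position is paired with every word of the referring
    expression, so attention can relate any image region to any word.
    """

    def __init__(self, vis_dim=512, lang_dim=300, key_dim=256):
        super().__init__()
        joint_dim = vis_dim + lang_dim + 8  # +8 for spatial coordinate features
        self.query = nn.Linear(joint_dim, key_dim)
        self.key = nn.Linear(joint_dim, key_dim)
        self.value = nn.Linear(joint_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vis_dim)

    def forward(self, vis_feat, lang_feat, spatial_feat):
        # vis_feat:     (B, H*W, vis_dim)  visual features from one CNN level
        # lang_feat:    (B, L, lang_dim)   word embeddings of the expression
        # spatial_feat: (B, H*W, 8)        normalized coordinate features
        B, N, _ = vis_feat.shape
        L = lang_feat.size(1)
        # Build the cross-modal feature: one vector per (position, word) pair.
        v = torch.cat([vis_feat, spatial_feat], dim=-1)           # (B, N, vis+8)
        v = v.unsqueeze(2).expand(-1, -1, L, -1)                  # (B, N, L, vis+8)
        w = lang_feat.unsqueeze(1).expand(-1, N, -1, -1)          # (B, N, L, lang)
        joint = torch.cat([v, w], dim=-1).reshape(B, N * L, -1)   # (B, N*L, joint)
        # Scaled dot-product self-attention over the N*L joint tokens.
        q, k = self.query(joint), self.key(joint)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        ctx = attn @ self.value(joint)                            # (B, N*L, joint)
        # Collapse the word dimension and map back to the visual feature size.
        ctx = ctx.reshape(B, N, L, -1).mean(dim=2)
        return self.out(ctx)                                      # (B, N, vis_dim)


class GatedMultiLevelFusion(nn.Module):
    """Gated combination of self-attentive features from several CNN levels."""

    def __init__(self, dim=512, num_levels=3):
        super().__init__()
        self.gates = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_levels)])

    def forward(self, level_feats):
        # level_feats: list of (B, N, dim) tensors, one per feature level.
        fused = 0
        for feat, gate in zip(level_feats, self.gates):
            # A sigmoid gate decides how much each level contributes per channel.
            fused = fused + torch.sigmoid(gate(feat)) * feat
        return fused
```

The key idea the sketch tries to convey is that attention is computed over the joint set of (region, word) features rather than over each modality separately, which is what lets the model capture long-range cross-modal dependencies; the gating step then weights the contribution of each feature level before the final segmentation prediction.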


Results from the Paper


Task                                | Dataset           | Model | Metric      | Value | Global Rank
Referring Expression Segmentation  | RefCOCO testA     | CMSA  | Overall IoU | 60.61 | # 22
Referring Expression Segmentation  | RefCOCO+ testA    | CMSA  | Overall IoU | 47.60 | # 20
Referring Expression Segmentation  | RefCOCO testB     | CMSA  | Overall IoU | 55.09 | # 17
Referring Expression Segmentation  | RefCOCO+ testB    | CMSA  | Overall IoU | 37.89 | # 19
Referring Expression Segmentation  | RefCOCO val       | CMSA  | Overall IoU | 58.32 | # 23
Referring Expression Segmentation  | RefCOCO+ val      | CMSA  | Overall IoU | 43.76 | # 22
Referring Video Object Segmentation | Refer-YouTube-VOS | CMSA | J&F         | 36.4  | # 14
Referring Video Object Segmentation | Refer-YouTube-VOS | CMSA | J           | 34.8  | # 14
Referring Video Object Segmentation | Refer-YouTube-VOS | CMSA | F           | 38.1  | # 14

Methods


No methods listed for this paper.