DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue

17 Nov 2019  ·  Xiaoze Jiang, Jing Yu, Zengchang Qin, Yingying Zhuang, Xingxing Zhang, Yue Hu, Qi Wu ·

Different from the Visual Question Answering task, which requires answering only a single question about an image, Visual Dialogue involves multiple questions that cover a broad range of visual content and may relate to any object, relationship, or semantic concept. The key challenge in Visual Dialogue is thus to learn a more comprehensive and semantically rich image representation that can adaptively attend to the image regions relevant to each question. In this research, we propose a novel model that depicts an image from both visual and semantic perspectives. Specifically, the visual view captures appearance-level information, including objects and their relationships, while the semantic view enables the agent to understand high-level visual semantics ranging from the whole image to local regions. Furthermore, on top of such multi-view image features, we propose a feature selection framework that adaptively captures question-relevant information in a hierarchical, fine-grained manner. The proposed method achieves state-of-the-art results on benchmark Visual Dialogue datasets. More importantly, by visualizing the gate values we can tell which modality (visual or semantic) contributes more to answering the current question, which offers insights into human cognition in Visual Dialogue.
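
The question-conditioned gating over the two views is the part that lends itself to a short illustration. Below is a minimal PyTorch sketch of a gate that mixes a visual-view feature with a semantic-view feature based on the question; the module name `GatedDualViewFusion`, the layer sizes, and the single-gate formulation are illustrative assumptions, not the exact DualVD architecture.

```python
import torch
import torch.nn as nn


class GatedDualViewFusion(nn.Module):
    """Illustrative question-conditioned gate that mixes a visual-view feature
    with a semantic-view feature. Dimensions and layer choices are assumptions,
    not the published DualVD design."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # The gate is predicted from the concatenation of both views and the question.
        self.gate = nn.Sequential(
            nn.Linear(3 * feat_dim, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, visual_feat, semantic_feat, question_feat):
        # All inputs: (batch, feat_dim)
        g = self.gate(torch.cat([visual_feat, semantic_feat, question_feat], dim=-1))
        # g near 1 -> rely on the visual view; g near 0 -> rely on the semantic view.
        fused = g * visual_feat + (1.0 - g) * semantic_feat
        return fused, g  # returning g allows visualizing modality contribution


if __name__ == "__main__":
    fusion = GatedDualViewFusion(feat_dim=512)
    v = torch.randn(2, 512)  # appearance-level (visual-view) feature
    s = torch.randn(2, 512)  # caption-based (semantic-view) feature
    q = torch.randn(2, 512)  # encoded question
    fused, gate = fusion(v, s, q)
    print(fused.shape, gate.mean().item())
```

Inspecting the returned gate values per question corresponds to the kind of analysis the abstract describes when attributing an answer to the visual or the semantic modality.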


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Visual Dialog | VisDial v0.9 val | DualVD | MRR | 62.94 | #6 |
| Visual Dialog | VisDial v0.9 val | DualVD | Mean Rank | 4.17 | #7 |
| Visual Dialog | VisDial v0.9 val | DualVD | R@1 | 48.64 | #11 |
| Visual Dialog | VisDial v0.9 val | DualVD | R@5 | 80.89 | #7 |
| Visual Dialog | VisDial v0.9 val | DualVD | R@10 | 89.94 | #7 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | NDCG (x 100) | 56.32 | #62 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | MRR (x 100) | 63.23 | #31 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | R@1 | 49.25 | #33 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | R@5 | 80.23 | #33 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | R@10 | 89.7 | #27 |
| Visual Dialog | Visual Dialog v1.0 test-std | DualVD | Mean Rank | 4.11 | #56 |