Multi-View Attention Network for Visual Dialog

29 Apr 2020 · Sungjin Park, Taesun Whang, Yeochan Yoon, Heuiseok Lim

Visual dialog is a challenging vision-language task in which an agent answers a series of questions visually grounded in a given image. Resolving the visual dialog task requires a high-level understanding of various multimodal inputs (e.g., the question, the dialog history, and the image). Specifically, an agent must 1) determine the semantic intent of the question and 2) align question-relevant textual and visual content across the heterogeneous modality inputs. In this paper, we propose the Multi-View Attention Network (MVAN), which leverages multiple views of the heterogeneous inputs through attention mechanisms. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching) and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). Experimental results on the VisDial v1.0 dataset show the effectiveness of the proposed model, which outperforms previous state-of-the-art methods on all evaluation metrics.
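To make the idea of attending to heterogeneous inputs concrete, the following is a minimal sketch, not the authors' implementation, of question-guided attention over dialog-history turns and image regions followed by a simple fusion. All module names, dimensions, and the fusion step are illustrative assumptions rather than the actual MVAN architecture.

```python
# Minimal sketch (illustrative assumptions, not the authors' MVAN code):
# the question attends over dialog-history turns and over image regions,
# and the two resulting "views" are fused with the question representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    """Attend over a set of context vectors (history turns or image regions)
    using the question representation as the query."""
    def __init__(self, query_dim, ctx_dim, hidden_dim=512):
        super().__init__()
        self.proj_q = nn.Linear(query_dim, hidden_dim)
        self.proj_c = nn.Linear(ctx_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, question, context):
        # question: (batch, query_dim); context: (batch, n, ctx_dim)
        q = self.proj_q(question).unsqueeze(1)               # (batch, 1, hidden)
        c = self.proj_c(context)                             # (batch, n, hidden)
        logits = self.score(torch.tanh(q + c)).squeeze(-1)   # (batch, n)
        weights = F.softmax(logits, dim=-1)                  # attention weights
        return torch.bmm(weights.unsqueeze(1), context).squeeze(1)  # (batch, ctx_dim)

class TwoViewFusion(nn.Module):
    """Fuse a history-attended view and an image-attended view with the question."""
    def __init__(self, q_dim=512, h_dim=512, v_dim=2048, out_dim=512):
        super().__init__()
        self.hist_att = QuestionGuidedAttention(q_dim, h_dim)
        self.img_att = QuestionGuidedAttention(q_dim, v_dim)
        self.fuse = nn.Linear(q_dim + h_dim + v_dim, out_dim)

    def forward(self, question, history, regions):
        hist_view = self.hist_att(question, history)   # question-relevant history
        img_view = self.img_att(question, regions)     # question-relevant image regions
        return torch.tanh(self.fuse(torch.cat([question, hist_view, img_view], dim=-1)))

# Usage with random tensors: batch of 2, 10 history turns, 36 region features
fusion = TwoViewFusion()
q = torch.randn(2, 512)
h = torch.randn(2, 10, 512)
v = torch.randn(2, 36, 2048)
print(fusion(q, h, v).shape)  # torch.Size([2, 512])
```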


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Dialog | VisDial v0.9 val | MVAN | MRR | 0.6765 | #11 |
| Visual Dialog | VisDial v0.9 val | MVAN | Mean Rank | 3.73 | #2 |
| Visual Dialog | VisDial v0.9 val | MVAN | R@1 | 54.65 | #3 |
| Visual Dialog | VisDial v0.9 val | MVAN | R@5 | 83.85 | #2 |
| Visual Dialog | VisDial v0.9 val | MVAN | R@10 | 91.47 | #3 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | NDCG (x 100) | 59.37 | #41 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | MRR (x 100) | 64.84 | #15 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | R@1 | 51.45 | #16 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | R@5 | 81.12 | #18 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | R@10 | 90.65 | #16 |
| Visual Dialog | VisDial v1.0 test-std | MVAN | Mean Rank | 3.97 | #64 |
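The retrieval metrics above (MRR, R@k, Mean Rank) follow the standard VisDial evaluation protocol, in which each question is answered by ranking 100 candidate answers and the rank of the ground-truth answer is recorded. The sketch below computes these metrics from a hypothetical list of ground-truth ranks; NDCG, which additionally requires dense relevance annotations over all candidates, is omitted here.

```python
# Sketch of the retrieval metrics, assuming the standard VisDial protocol:
# for each question the model ranks 100 candidate answers and we record the
# rank of the ground-truth answer (1 = best). The rank list below is made up.
def retrieval_metrics(gt_ranks, ks=(1, 5, 10)):
    n = len(gt_ranks)
    metrics = {
        "MRR": sum(1.0 / r for r in gt_ranks) / n,        # mean reciprocal rank
        "Mean Rank": sum(gt_ranks) / n,                   # lower is better
    }
    for k in ks:
        # Recall@k: fraction of questions whose ground truth is ranked in the top k
        metrics[f"R@{k}"] = 100.0 * sum(r <= k for r in gt_ranks) / n
    return metrics

print(retrieval_metrics([1, 3, 2, 12, 1, 7]))
# approx: MRR 0.51, Mean Rank 4.33, R@1 33.3, R@5 66.7, R@10 83.3
```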
