ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph

30 Jun 2020  ·  Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang ·

We propose a knowledge-enhanced approach, ERNIE-ViL, which incorporates structured knowledge obtained from scene graphs to learn joint vision-language representations. ERNIE-ViL aims to build the detailed semantic connections (objects, attributes of objects and relationships between objects) across vision and language, which are essential to vision-language cross-modal tasks. Utilizing scene graphs of visual scenes, ERNIE-ViL constructs Scene Graph Prediction tasks, i.e., Object Prediction, Attribute Prediction and Relationship Prediction, in the pre-training phase. Specifically, these prediction tasks are implemented by predicting nodes of different types in the scene graph parsed from the sentence. Thus, ERNIE-ViL can learn joint representations characterizing the alignment of detailed semantics across vision and language. After pre-training on large-scale image-text aligned datasets, we validate the effectiveness of ERNIE-ViL on 5 cross-modal downstream tasks. ERNIE-ViL achieves state-of-the-art performance on all these tasks and ranks first on the VCR leaderboard with an absolute improvement of 3.7%.
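As a concrete illustration of the Scene Graph Prediction objectives described in the abstract, the following Python sketch masks object, attribute and relationship words in a caption so that they become prediction targets. The toy caption, the hand-written scene graph and the `mask_nodes` helper are illustrative assumptions, not the authors' implementation; in the paper, scene graphs are parsed automatically from the sentence, and the masked words are predicted by the cross-modal model conditioned on the image and the remaining text.

```python
# Minimal sketch of scene-graph-based masking for the three ERNIE-ViL
# pre-training objectives (Object / Attribute / Relationship Prediction).
# The caption, scene graph and helper below are illustrative assumptions.
import random

MASK = "[MASK]"

# Toy caption and a scene graph a parser might extract from it:
# objects, attributes attached to objects, relationships between objects.
caption = "a brown dog beside a red couch".split()
scene_graph = {
    "objects": ["dog", "couch"],
    "attributes": [("brown", "dog"), ("red", "couch")],
    "relationships": [("dog", "beside", "couch")],
}

def mask_nodes(tokens, nodes, mask_prob=1.0):
    """Replace tokens that correspond to scene-graph nodes with [MASK];
    the masked positions (and their original words) become prediction targets."""
    tokens = list(tokens)
    targets = []
    for i, tok in enumerate(tokens):
        if tok in nodes and random.random() < mask_prob:
            targets.append((i, tok))
            tokens[i] = MASK
    return tokens, targets

# Node sets for the three prediction tasks.
object_words = set(scene_graph["objects"])
attribute_words = {attr for attr, _ in scene_graph["attributes"]}
relationship_words = {rel for _, rel, _ in scene_graph["relationships"]}

random.seed(0)
for task, nodes in [("Object Prediction", object_words),
                    ("Attribute Prediction", attribute_words),
                    ("Relationship Prediction", relationship_words)]:
    masked, targets = mask_nodes(caption, nodes)
    print(f"{task}: {' '.join(masked)}  -> targets: {targets}")
```

In the actual pre-training, these masked scene-graph nodes are predicted from the joint image-text context, which pushes the model to align fine-grained visual semantics (objects, their attributes, and their relationships) with the corresponding words.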

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Referring Expression Comprehension | RefCOCO+ | ERNIE-ViL-large | Val | 75.95 | #9 |
| Referring Expression Comprehension | RefCOCO+ | ERNIE-ViL-large | Test A | 82.07 | #7 |
| Referring Expression Comprehension | RefCOCO+ | ERNIE-ViL-large | Test B | 66.88 | #7 |
| Visual Question Answering (VQA) | VCR (Q-AR) test | ERNIE-ViL-large (ensemble of 15 models) | Accuracy | 70.5 | #2 |
| Visual Question Answering (VQA) | VCR (QA-R) test | ERNIE-ViL-large (ensemble of 15 models) | Accuracy | 86.1 | #2 |
| Visual Question Answering (VQA) | VCR (Q-A) test | ERNIE-ViL-large (ensemble of 15 models) | Accuracy | 81.6 | #2 |
| Visual Question Answering (VQA) | VQA v2 test-std | ERNIE-ViL (single model) | overall | 74.93 | #17 |
| Visual Question Answering (VQA) | VQA v2 test-std | ERNIE-ViL (single model) | yes/no | 90.83 | #8 |
| Visual Question Answering (VQA) | VQA v2 test-std | ERNIE-ViL (single model) | number | 56.79 | #10 |
| Visual Question Answering (VQA) | VQA v2 test-std | ERNIE-ViL (single model) | other | 65.24 | #8 |
