Situation Recognition with Graph Neural Networks

We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action) and fill its semantic roles, such as who is performing the action, what the source and target of the action are, and so on. Different verbs have different roles (e.g., attacking has a weapon role), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach, which propagates information between roles, significantly outperforms existing work as well as multiple baselines. We obtain a roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and of the influence of the different roles of each verb.
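The abstract describes gated message passing between role nodes of a verb frame. Below is a minimal PyTorch-style sketch of that idea, assuming a fully connected role graph, GRU-style node updates, and a hidden size of 1024; these choices are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RoleGraphPropagation(nn.Module):
    """One round of message passing between the role nodes of a verb frame (sketch)."""
    def __init__(self, hidden_dim=1024):
        super().__init__()
        self.message_fn = nn.Linear(hidden_dim, hidden_dim)  # transform outgoing messages
        self.update_fn = nn.GRUCell(hidden_dim, hidden_dim)  # gated node-state update

    def forward(self, node_states, adjacency):
        # node_states: (num_roles, hidden_dim); adjacency: (num_roles, num_roles) 0/1 matrix
        messages = self.message_fn(node_states)
        aggregated = adjacency @ messages       # sum messages from connected roles
        return self.update_fn(aggregated, node_states)

# Usage: a few propagation steps over a fully connected graph of 4 roles
num_roles, hidden = 4, 1024
adj = torch.ones(num_roles, num_roles) - torch.eye(num_roles)  # no self-loops
states = torch.randn(num_roles, hidden)   # stand-in for image + role features
layer = RoleGraphPropagation(hidden)
for _ in range(3):
    states = layer(states, adj)
# each role's final state would feed a per-role noun classifier
```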

ICCV 2017

Results from the Paper


Task                            Dataset  Model     Metric               Metric Value  Global Rank
Situation Recognition           imSitu   GraphNet  Top-1 Verb           36.72         #9
Situation Recognition           imSitu   GraphNet  Top-1 Verb & Value   27.52         #9
Situation Recognition           imSitu   GraphNet  Top-5 Verbs          61.90         #11
Situation Recognition           imSitu   GraphNet  Top-5 Verbs & Value  45.39         #11
Grounded Situation Recognition  SWiG     GraphNet  Top-1 Verb           36.72         #9
Grounded Situation Recognition  SWiG     GraphNet  Top-1 Verb & Value   27.52         #9
Grounded Situation Recognition  SWiG     GraphNet  Top-5 Verbs          61.90         #11
Grounded Situation Recognition  SWiG     GraphNet  Top-5 Verbs & Value  45.39         #11
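For reference, a small sketch of how the Top-1 Verb and Top-1 Verb & Value numbers are conventionally scored on imSitu-style annotations: the verb metric counts correct verb predictions, and the value metric counts role fillers that match at least one annotator, conditioned on the verb being correct. The data layout (`predictions`, `gold`, three annotator nouns per role) is an assumption for illustration, not the official evaluation code.

```python
def score_top1(predictions, gold):
    """predictions: {img: (verb, {role: noun})}
       gold:        {img: (verb, {role: [noun_ann1, noun_ann2, noun_ann3]})}"""
    verb_hits, value_hits, value_total = 0, 0, 0
    for img, (pred_verb, pred_roles) in predictions.items():
        gold_verb, gold_roles = gold[img]
        verb_correct = pred_verb == gold_verb
        verb_hits += verb_correct
        for role, gold_nouns in gold_roles.items():
            value_total += 1
            # a role value counts only if the verb is right and the predicted
            # noun matches at least one annotator's noun for that role
            value_hits += verb_correct and (pred_roles.get(role) in gold_nouns)
    return {"top1_verb": verb_hits / len(gold),
            "top1_verb_value": value_hits / value_total}
```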

Methods


No methods listed for this paper.