NeuSyRE: Neuro-Symbolic Visual Understanding and Reasoning Framework based on Scene Graph Enrichment

Semantic Web 2023  ·  M. Jaleed Khan, John Breslin, Edward Curry

Neuro-symbolic hybrid approaches are essential for seamless high-level understanding of and reasoning about visual scenes. Scene Graph Generation (SGG) is a symbolic image representation approach based on deep neural networks (DNNs) that predicts objects, their attributes, and pairwise visual relationships in images to create scene graphs, which are used in downstream visual reasoning. The crowdsourced training datasets used in SGG are highly imbalanced, which leads to biased SGG results, and the vast number of possible triplets makes it challenging to collect sufficient training samples for every visual concept or relationship. To address these challenges, we propose augmenting the typical data-driven SGG approach with common sense knowledge to enhance the expressiveness and autonomy of visual understanding and reasoning. We present a loosely coupled neuro-symbolic visual understanding and reasoning framework that employs a DNN-based pipeline for object detection and multi-modal pairwise relationship prediction to generate scene graphs, and leverages common sense knowledge from heterogeneous knowledge graphs to enrich the scene graphs for improved downstream reasoning. A comprehensive evaluation on multiple standard datasets, including Visual Genome and Microsoft COCO, shows that the proposed approach outperforms state-of-the-art SGG methods in terms of relationship recall scores, i.e., Recall@K and mean Recall@K, as well as state-of-the-art scene graph-based image captioning methods in terms of SPICE and CIDEr scores, with comparable BLEU, ROUGE and METEOR scores. The qualitative results show that enrichment improves the expressiveness of scene graphs, leading to more intuitive and meaningful captions generated from them. These results validate the effectiveness of enriching scene graphs with common sense knowledge from heterogeneous knowledge graphs, and this work provides a baseline for future research in knowledge-enhanced visual understanding and reasoning.
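The enrichment step described above can be pictured as follows: the triplets predicted by the DNN pipeline are extended with related common sense facts retrieved from a knowledge graph, which are appended as additional nodes and edges. The sketch below is a minimal illustration of that idea, not the authors' implementation; the in-memory COMMONSENSE lookup is a hypothetical stand-in for queries against heterogeneous knowledge graphs such as ConceptNet or WordNet.

```python
# Minimal sketch of scene graph enrichment with common sense knowledge.
# Not the paper's implementation: COMMONSENSE below is a hypothetical
# stand-in for queries against heterogeneous knowledge graphs.

from typing import Dict, List, Set, Tuple

Triplet = Tuple[str, str, str]  # (subject, predicate, object)

# Hypothetical common sense facts keyed by detected object label.
COMMONSENSE: Dict[str, List[Triplet]] = {
    "dog":   [("dog", "is_a", "animal"), ("dog", "capable_of", "barking")],
    "leash": [("leash", "used_for", "walking a dog")],
}

def enrich_scene_graph(detected: Set[str],
                       relations: List[Triplet]) -> List[Triplet]:
    """Append common sense triplets whose subject appears among the
    detected objects, yielding an enriched scene graph."""
    enriched = list(relations)
    for obj in detected:
        for fact in COMMONSENSE.get(obj, []):
            if fact not in enriched:
                enriched.append(fact)
    return enriched

if __name__ == "__main__":
    objects = {"dog", "leash", "person"}
    predicted = [("person", "holding", "leash"),
                 ("leash", "attached_to", "dog")]
    for s, p, o in enrich_scene_graph(objects, predicted):
        print(f"{s} --{p}--> {o}")
```

In the actual framework, the retrieved facts would come from knowledge graph queries and be filtered for relevance before being merged into the scene graph; the toy lookup above only illustrates the merge step.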


Results from the Paper


Task                   | Dataset       | Model   | Metric  | Value | Global Rank
-----------------------|---------------|---------|---------|-------|------------
Image Captioning       | MS-COCO       | NeuSyRE | SPICE   | 23.8  | # 1
Image Captioning       | MS-COCO       | NeuSyRE | CIDEr   | 131.4 | # 1
Image Captioning       | MS-COCO       | NeuSyRE | BLEU-1  | 79.1  | # 1
Image Captioning       | MS-COCO       | NeuSyRE | BLEU-4  | 37.6  | # 1
Image Captioning       | MS-COCO       | NeuSyRE | ROUGE-L | 57.7  | # 1
Image Captioning       | MS-COCO       | NeuSyRE | METEOR  | 28.5  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | R@100   | 38.5  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | R@50    | 36.3  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | R@20    | 27.9  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | mR@100  | 12.8  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | mR@50   | 11.6  | # 1
Scene Graph Generation | MS-COCO       | NeuSyRE | mR@20   | 9.2   | # 1
Scene Graph Generation | Visual Genome | NeuSyRE | R@100   | 39.1  | # 2
Scene Graph Generation | Visual Genome | NeuSyRE | mR@100  | 12.6  | # 1
Scene Graph Generation | Visual Genome | NeuSyRE | mR@50   | 10.9  | # 3
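For reference, Recall@K (R@K) treats a ground-truth relationship as recalled if it appears among the model's top-K predicted triplets for an image, while mean Recall@K (mR@K) averages per-predicate recall so that rare predicates are not swamped by frequent ones. The snippet below is an illustrative computation on toy triplets, not the official Visual Genome evaluation protocol; the function names and data are hypothetical.

```python
# Illustrative computation of Recall@K and mean Recall@K for relationship
# triplets (toy data; not the official evaluation code).

from collections import defaultdict
from typing import List, Tuple

Triplet = Tuple[str, str, str]  # (subject, predicate, object)

def recall_at_k(predicted: List[Triplet], ground_truth: List[Triplet], k: int) -> float:
    """Fraction of ground-truth triplets found in the top-k predictions
    (predictions are assumed to be sorted by confidence)."""
    top_k = set(predicted[:k])
    hits = sum(1 for t in ground_truth if t in top_k)
    return hits / len(ground_truth) if ground_truth else 0.0

def mean_recall_at_k(predicted: List[Triplet], ground_truth: List[Triplet], k: int) -> float:
    """Average of per-predicate recalls, so rare predicates count equally."""
    by_predicate = defaultdict(list)
    for t in ground_truth:
        by_predicate[t[1]].append(t)
    top_k = set(predicted[:k])
    per_pred = [sum(1 for t in gts if t in top_k) / len(gts)
                for gts in by_predicate.values()]
    return sum(per_pred) / len(per_pred) if per_pred else 0.0

if __name__ == "__main__":
    gt = [("man", "riding", "horse"), ("man", "wearing", "hat")]
    pred = [("man", "riding", "horse"), ("man", "near", "horse"), ("man", "wearing", "hat")]
    print(recall_at_k(pred, gt, k=2), mean_recall_at_k(pred, gt, k=2))
```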
