Biasing Like Human: A Cognitive Bias Framework for Scene Graph Generation

17 Mar 2022 · Xiaoguang Chang, Teng Wang, Changyin Sun, Wenzhe Cai

Scene graph generation is a sophisticated task because there is no specific recognition pattern (e.g., "looking at" and "near" have no conspicuous visual difference, and "near" can hold between entities of very different morphology). As a result, some scene graph generation methods get trapped into predicting the most frequent relations, driven by capricious visual features and trivial dataset annotations. Recent works have therefore emphasized "unbiased" approaches that balance predictions to yield a more informative scene graph. However, humans' quick and accurate judgments about the relations between numerous objects should be attributed to "bias" (i.e., experience and linguistic knowledge) rather than to pure vision. Inspired by this "cognitive bias" mechanism, we propose a novel three-paradigm framework that simulates how humans incorporate the linguistic features of labels as guidance for vision-based representations, in order to better mine hidden relation patterns and alleviate noisy visual propagation. Our framework is model-agnostic and can be attached to any scene graph model. Comprehensive experiments show that our framework outperforms baseline modules on several metrics with a minimal increase in parameters and achieves new state-of-the-art performance on the Visual Genome dataset.
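To make the idea concrete, below is a minimal PyTorch sketch of the kind of mechanism the abstract describes: label linguistic features (word embeddings of predicted entity classes) gating vision-based relation features. The module name, dimensions, and gating scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LinguisticBiasGate(nn.Module):
    """Hypothetical module: label word embeddings act as a "cognitive
    bias" that decides which channels of the visual relation feature
    to trust. Names and sizes are assumptions for illustration."""

    def __init__(self, num_classes: int, embed_dim: int = 200, vis_dim: int = 512):
        super().__init__()
        # In practice the label embeddings would be initialized from
        # pretrained word vectors (e.g., GloVe) rather than at random.
        self.label_embed = nn.Embedding(num_classes, embed_dim)
        # Map concatenated subject/object embeddings to a per-channel gate.
        self.gate = nn.Sequential(
            nn.Linear(2 * embed_dim, vis_dim),
            nn.Sigmoid(),
        )

    def forward(self, vis_feat, subj_labels, obj_labels):
        # vis_feat: (N, vis_dim) visual features of N subject-object pairs
        # subj_labels, obj_labels: (N,) predicted entity class indices
        ling = torch.cat(
            [self.label_embed(subj_labels), self.label_embed(obj_labels)],
            dim=-1,
        )
        # Linguistic guidance suppresses noisy visual channels.
        return vis_feat * self.gate(ling)

if __name__ == "__main__":
    gate = LinguisticBiasGate(num_classes=151)  # e.g., 150 VG classes + background
    vis = torch.randn(4, 512)
    subj = torch.tensor([5, 12, 7, 33])
    obj = torch.tensor([9, 2, 14, 60])
    print(gate(vis, subj, obj).shape)  # torch.Size([4, 512])
```

Being model-agnostic, a gate of this sort could wrap the pairwise relation features of any scene graph backbone before the predicate classifier.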

Datasets

Visual Genome
Results from the Paper


 Ranked #1 on Predicate Classification on Visual Genome (mean Recall @20 metric)

Task                       Dataset         Model    Metric Name       Metric Value   Global Rank
Predicate Classification   Visual Genome   C-bias   mean Recall @20   31.30          #1
Scene Graph Generation     Visual Genome   C-bias   mean Recall @20   11.63          #1
Scene Graph Generation     Visual Genome   C-bias   mean Recall @100  17.24          #3
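The mean Recall @K (mR@K) values above average per-predicate-class recall, so rare predicates count as much as frequent ones. The sketch below is a simplified single-image version of that computation (the benchmark itself averages each class's recall over all images before taking the mean across classes); the function and its arguments are assumptions for illustration.

```python
from collections import defaultdict

def mean_recall_at_k(gt_triplets, topk_predictions):
    """Simplified, single-image mean Recall @K: compute recall per
    predicate class among the model's top-K triplets, then average
    over the predicate classes present in the ground truth."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    pred_set = set(topk_predictions)  # (subj, predicate, obj) tuples
    for triplet in gt_triplets:
        _, predicate, _ = triplet
        totals[predicate] += 1
        if triplet in pred_set:
            hits[predicate] += 1
    recalls = [hits[p] / totals[p] for p in totals]
    return sum(recalls) / len(recalls) if recalls else 0.0

# Toy usage with class indices standing in for labels:
gt = [(0, 1, 2), (3, 1, 4), (0, 7, 5)]
topk = [(0, 1, 2), (0, 7, 5), (6, 1, 8)]
print(mean_recall_at_k(gt, topk))  # (0.5 + 1.0) / 2 = 0.75
```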

Methods


No methods listed for this paper.