An Examination of the Compositionality of Large Generative Vision-Language Models

21 Aug 2023 · Teli Ma, Rong Li, Junwei Liang

With the success of Large Language Models (LLMs), many Generative Vision-Language Models (GVLMs) have been constructed via multimodal instruction tuning. However, the performance of GVLMs in multimodal compositional reasoning remains under-explored. In this paper, we examine both the evaluation metrics (VisualGPTScore, etc.) and the current benchmarks for evaluating the compositionality of GVLMs. We identify a syntactical bias in current benchmarks that the linguistic capability of GVLMs can exploit; this bias renders VisualGPTScore an insufficient metric for assessing GVLMs. To mitigate it, we first introduce a SyntaxBias Score, which leverages LLMs to quantify the bias. We then add a challenging new task that evaluates the robustness of GVLMs against their inherent inclination toward syntactical correctness. Using the bias-mitigated datasets and the new task, we propose a novel benchmark, the SyntActically DE-biased benchmark (SADE). Our study provides an unbiased benchmark for the compositionality of GVLMs, facilitating future research in this direction. Code and dataset are available at https://github.com/TeleeMa/SADE.
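For context on the scores reported below: GPTScore-style metrics rank candidate captions by their generative log-likelihood under the language model, while VisualGPTScore additionally conditions that likelihood on the image. The following is a minimal sketch of the text-only variant using an off-the-shelf causal LM; the model choice (gpt2) and the per-token normalization are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of GPTScore-style caption scoring: rank candidate captions by
# length-normalized log-likelihood under a causal LM. VisualGPTScore
# would condition this likelihood on image tokens as well.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def gpt_score(caption: str) -> float:
    """Average per-token log-probability of the caption."""
    ids = tokenizer(caption, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy over predicted tokens; negate to get log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item()

captions = ["the dog chases the cat", "the cat chases the dog"]
print({c: gpt_score(c) for c in captions})
```

Because such a scorer rewards fluent, syntactically well-formed strings regardless of the image, a benchmark whose negative captions are syntactically degraded can be solved by the LM prior alone; this is the syntactical bias the paper's SADE benchmark is designed to remove.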


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Reasoning | Winoground | LLaVA-7B (GPTScore) | Text Score | 25.50 | #83 |
| Visual Reasoning | Winoground | LLaVA-7B (GPTScore) | Image Score | 17.00 | #60 |
| Visual Reasoning | Winoground | LLaVA-7B (GPTScore) | Group Score | 10.50 | #66 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (VisualGPTScore) | Text Score | 23.25 | #91 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (VisualGPTScore) | Image Score | 18.00 | #56 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (VisualGPTScore) | Group Score | 9.50 | #71 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (GPTScore) | Text Score | 24.50 | #87 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (GPTScore) | Image Score | 21.75 | #45 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (GPTScore) | Group Score | 11.50 | #62 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (BERTScore) | Text Score | 14.00 | #110 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (BERTScore) | Image Score | 8.00 | #96 |
| Visual Reasoning | Winoground | MiniGPT-4-7B (BERTScore) | Group Score | 2.75 | #101 |
| Visual Reasoning | Winoground | LLaVA-7B (BERTScore) | Text Score | 13.50 | #111 |
| Visual Reasoning | Winoground | LLaVA-7B (BERTScore) | Image Score | 5.25 | #107 |
| Visual Reasoning | Winoground | LLaVA-7B (BERTScore) | Group Score | 2.25 | #103 |
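For reference, the Text, Image, and Group scores above follow the standard Winoground protocol: each example pairs two captions (c0, c1) with two images (i0, i1), and a model earns credit only when its match scores order them correctly. Below is a minimal sketch, assuming a hypothetical `score(caption, image)` match function (e.g., one of the metrics above) supplied by the caller.

```python
# Sketch of the Winoground Text/Image/Group metrics. `examples` is an
# iterable of (c0, c1, i0, i1) tuples; `score` is any caption-image
# match function (a stand-in here, not a specific library API).
from typing import Callable, Iterable, Tuple

def winoground_metrics(examples: Iterable[Tuple], score: Callable) -> dict:
    text = image = group = 0
    examples = list(examples)
    for c0, c1, i0, i1 in examples:
        # Text score: each image must prefer its own caption.
        t = score(c0, i0) > score(c1, i0) and score(c1, i1) > score(c0, i1)
        # Image score: each caption must prefer its own image.
        i = score(c0, i0) > score(c0, i1) and score(c1, i1) > score(c1, i0)
        text += t
        image += i
        group += t and i  # Group score: both criteria hold at once.
    n = len(examples)
    return {"text": text / n, "image": image / n, "group": group / n}
```

The Group score is necessarily the hardest of the three, which is why it is the lowest column for every model and metric in the table above.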
