Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?

23 Feb 2023 · Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang

Pre-trained vision and language models have demonstrated state-of-the-art capabilities over existing tasks involving images and texts, including visual question answering. However, it remains unclear whether these models can answer questions that not only query visual content but are also knowledge-intensive and information-seeking. In this study, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with only common sense knowledge. Using InfoSeek, we analyze various pre-trained visual question answering models and gain insights into their characteristics. Our findings reveal that state-of-the-art pre-trained multi-modal models (e.g., PaLI-X, BLIP2) struggle to answer visual information-seeking questions, but fine-tuning on the InfoSeek dataset enables models to draw on fine-grained knowledge learned during pre-training. Furthermore, we show that accurate visual entity recognition can improve performance on InfoSeek by retrieving relevant documents, indicating significant room for improvement.
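To make the recognize-then-retrieve-then-read idea behind baselines such as CLIP + FiD more concrete, below is a minimal sketch (not the authors' code) of such a pipeline using Hugging Face transformers. The candidate entity list, the knowledge-base passages, and the Flan-T5 reader used as a stand-in for FiD/PaLM are all illustrative assumptions.

```python
# Sketch of a recognize-then-read pipeline: CLIP links the image to a visual
# entity, the entity's document is retrieved, and a text reader answers the
# question from that context. Entities, passages, and the reader checkpoint
# are placeholders, not the paper's actual setup.
import torch
from transformers import CLIPModel, CLIPProcessor, pipeline

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
reader = pipeline("text2text-generation", model="google/flan-t5-base")  # stand-in reader

# Hypothetical knowledge base: entity name -> Wikipedia-style passage.
knowledge_base = {
    "Golden Gate Bridge": "The Golden Gate Bridge is a suspension bridge opened in 1937 ...",
    "Eiffel Tower": "The Eiffel Tower is a wrought-iron tower completed in 1889 ...",
}

def answer(image, question):
    # 1) Visual entity recognition: score the image against candidate entity names.
    entities = list(knowledge_base)
    inputs = processor(text=entities, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image  # shape: (1, num_entities)
    entity = entities[logits.argmax(dim=-1).item()]

    # 2) Retrieve the matched entity's document, then 3) read the answer from it.
    context = knowledge_base[entity]
    prompt = f"question: {question} context: {context}"
    return entity, reader(prompt, max_new_tokens=20)[0]["generated_text"]

# Example usage (assumes a PIL image):
#   from PIL import Image
#   answer(Image.open("bridge.jpg"), "When was this bridge opened?")
```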


Datasets


Introduced in the Paper:

InfoSeek

Used in the Paper:

TyDiQA, OVEN
Results from the Paper

Task                             Dataset    Model               Metric    Value   Global Rank
Visual Question Answering (VQA)  InfoSeek   CLIP + FiD          Accuracy  20.9    #3
Visual Question Answering (VQA)  InfoSeek   CLIP + PaLM (540B)  Accuracy  20.4    #4
Visual Question Answering (VQA)  InfoSeek   PaLI                Accuracy  19.7    #5

Methods


No methods listed for this paper.