Visual Question Answering (VQA)

767 papers with code • 62 benchmarks • 112 datasets

Visual Question Answering (VQA) is a computer vision task in which a system answers questions about an image: given an image and a natural-language question, the model must understand the image's content and produce an answer in natural language.

Image Source: visualqa.org
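
For a concrete sense of the input/output contract, here is a minimal sketch that answers a question about an image with an off-the-shelf model. It assumes the Hugging Face transformers library and the public dandelin/vilt-b32-finetuned-vqa checkpoint; both are illustrative choices, not tools prescribed by this page.

```python
# Minimal VQA sketch: image + natural-language question -> short answer.
# Assumes the Hugging Face `transformers` library and a public ViLT checkpoint;
# both are illustrative choices, not tools endorsed by this listing.
from transformers import pipeline

vqa = pipeline(
    task="visual-question-answering",
    model="dandelin/vilt-b32-finetuned-vqa",
)

# The pipeline accepts an image (local path, URL, or PIL.Image) and a question,
# and returns candidate answers ranked by score.
result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)
print(result[0]["answer"], result[0]["score"])
```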

Latest papers with no code

Unified Scene Representation and Reconstruction for 3D Large Language Models

no code yet • 19 Apr 2024

Existing approaches extract point clouds either from ground truth (GT) geometry or from 3D scenes reconstructed by auxiliary models.

TextSquare: Scaling up Text-Centric Visual Instruction Tuning

no code yet • 19 Apr 2024

Text-centric visual question answering (VQA) has made great strides with the development of Multimodal Large Language Models (MLLMs), yet open-source models still fall short of leading models like GPT4V and Gemini, partly due to a lack of extensive, high-quality instruction tuning data.

PDF-MVQA: A Dataset for Multimodal Information Retrieval in PDF-based Visual Question Answering

no code yet • 19 Apr 2024

Document Question Answering (QA) presents a challenge in understanding visually-rich documents (VRD), particularly those dominated by lengthy textual content like research journal articles.

Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models

no code yet • 18 Apr 2024

On text benchmarks, Core not only performs competitively with other frontier models on a set of well-established benchmarks (e.g., MMLU, GSM8K) but also outperforms GPT4-0613 on human evaluation.

MedThink: Explaining Medical Visual Question Answering via Multimodal Decision-Making Rationale

no code yet • 18 Apr 2024

Moreover, we design a novel framework that fine-tunes lightweight pretrained generative models by incorporating medical decision-making rationales into the training process.
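
As a rough illustration of what "incorporating rationales into the training process" can look like, the sketch below composes sequence-to-sequence training pairs whose targets contain both a rationale and the final answer. The template and field names are assumptions made for illustration, not MedThink's actual format.

```python
# Hypothetical sketch of rationale-augmented training pairs for a generative
# medical VQA model; the template and field names are illustrative assumptions.
def build_training_pair(question: str, image_id: str, rationale: str, answer: str) -> dict:
    """Compose a source/target pair whose target interleaves the
    decision-making rationale with the final answer, so a seq2seq model
    learns to generate both."""
    source = f"question: {question} image: {image_id}"
    target = f"rationale: {rationale} answer: {answer}"
    return {"source": source, "target": target}

example = build_training_pair(
    question="Is there evidence of pneumonia in this chest X-ray?",
    image_id="xray_0412",
    rationale="Focal consolidation is visible in the right lower lobe.",
    answer="Yes",
)
print(example["target"])
```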

Find The Gap: Knowledge Base Reasoning For Visual Question Answering

no code yet • 16 Apr 2024

2) How do task-specific and LLM-based models perform at integrating visual and external knowledge, and at multi-hop reasoning over both sources of information?

Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs

no code yet • 11 Apr 2024

Integration of Large Language Models (LLMs) into visual domain tasks, resulting in visual-LLMs (V-LLMs), has enabled exceptional performance in vision-language tasks, particularly for visual question answering (VQA).

BRAVE: Broadening the visual encoding of vision-language models

no code yet • 10 Apr 2024

Our results highlight the potential of incorporating different visual biases for a broader and more contextualized visual understanding in VLMs.
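
The idea of broadening the visual encoding can be sketched as follows: features from several (typically frozen) vision backbones are projected to a common width and concatenated before being handed to the language model. This is an illustrative PyTorch sketch with dummy encoders, not BRAVE's actual architecture.

```python
# Illustrative sketch (not BRAVE's actual design): fuse visual tokens from
# several backbones so the language model sees complementary visual biases.
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for a pretrained vision backbone returning (batch, tokens, dim)."""
    def __init__(self, tokens: int, dim: int):
        super().__init__()
        self.tokens = tokens
        self.proj = nn.Linear(3, dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Treat pixels as crude "patch" tokens purely for illustration.
        patches = image.flatten(2).transpose(1, 2)[:, : self.tokens, :]  # (B, T, 3)
        return self.proj(patches)

class MultiEncoderFusion(nn.Module):
    """Project each encoder's features to a shared width and concatenate them."""
    def __init__(self, encoders, dims, out_dim: int):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.projections = nn.ModuleList(nn.Linear(d, out_dim) for d in dims)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = [p(e(image)) for e, p in zip(self.encoders, self.projections)]
        return torch.cat(feats, dim=1)  # concatenate along the token axis

encoders = [DummyEncoder(tokens=16, dim=384), DummyEncoder(tokens=16, dim=768)]
fusion = MultiEncoderFusion(encoders, dims=[384, 768], out_dim=512)
visual_tokens = fusion(torch.randn(2, 3, 224, 224))
print(visual_tokens.shape)  # torch.Size([2, 32, 512])
```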

HAMMR: HierArchical MultiModal React agents for generic VQA

no code yet • 8 Apr 2024

We start from a multimodal ReAct-based system and make it hierarchical by enabling our HAMMR agents to call upon other specialized agents.
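
A toy sketch of the hierarchical idea: a top-level agent decides which specialized agent to invoke for a given question. The agent names and keyword-based routing below are invented for illustration; a ReAct-style system would instead let an LLM reason about which agent or tool to call.

```python
# Toy hierarchical dispatch in the spirit of "agents calling specialized agents".
# Agent names and routing rules are illustrative assumptions, not HAMMR's design.
from typing import Callable, Dict

def ocr_agent(question: str, image: str) -> str:
    return f"[OCR agent] read the text in {image} to answer: {question}"

def counting_agent(question: str, image: str) -> str:
    return f"[Counting agent] counted objects in {image} to answer: {question}"

SPECIALISTS: Dict[str, Callable[[str, str], str]] = {
    "count": counting_agent,
    "text": ocr_agent,
}

def top_level_agent(question: str, image: str) -> str:
    """Crude keyword routing; a real ReAct agent would let an LLM choose."""
    q = question.lower()
    if "how many" in q or "count" in q:
        return SPECIALISTS["count"](question, image)
    if "written" in q or "say" in q or "text" in q:
        return SPECIALISTS["text"](question, image)
    return f"[Generic VQA agent] answered directly: {question}"

print(top_level_agent("How many people are in the photo?", "photo.jpg"))
print(top_level_agent("What is written on the sign?", "sign.png"))
```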

Study of the effect of Sharpness on Blind Video Quality Assessment

no code yet • 6 Apr 2024

A comparative study of correlation metrics such as SRCC and PLCC, measured during training and testing, is presented along with the conclusions.
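
For reference, SRCC and PLCC are the Spearman rank-order and Pearson linear correlation coefficients between predicted and subjective quality scores; a minimal computation with SciPy is sketched below (the score values are made up).

```python
# Minimal SRCC/PLCC computation with SciPy; the score values are made up.
from scipy.stats import pearsonr, spearmanr

predicted_scores = [3.1, 4.0, 2.2, 4.8, 3.6]   # model-predicted quality
subjective_mos   = [3.0, 4.2, 2.5, 4.9, 3.3]   # mean opinion scores from viewers

srcc, _ = spearmanr(predicted_scores, subjective_mos)  # monotonic agreement
plcc, _ = pearsonr(predicted_scores, subjective_mos)   # linear agreement
print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")
```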