VizWiz Answer Grounding

Introduced by Chen et al. in Grounding Answers for Visual Questions Asked by Visually Impaired People

Visual Question Answering (VQA) is the task of returning the answer to a question about an image. While most VQA services only return a natural language answer, we believe it is also valuable for a VQA service to return the region in the image used to arrive at the answer. We call this task of locating the relevant visual evidence answer grounding. We publicly share the VizWiz-VQA-Grounding dataset, the first dataset that visually grounds answers to visual questions asked by people with visual impairments, to encourage community progress in developing algorithmic frameworks.
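As a rough sketch of what a single answer-grounding sample involves, the snippet below pairs a visual question and its answer with a polygon over the supporting image region and rasterizes that polygon into a binary mask. The field names, file name, and polygon encoding are illustrative assumptions, not the dataset's actual annotation schema.

```python
from PIL import Image, ImageDraw

# Hypothetical annotation record; the real VizWiz-VQA-Grounding files may use a
# different schema (field names, mask encoding), so treat this as an assumption.
sample = {
    "image": "VizWiz_val_00000001.jpg",
    "question": "What is the expiration date on this carton?",
    "answer": "march 14 2022",
    # Answer grounding expressed as a polygon over the relevant region (x, y pixels).
    "grounding_polygon": [(120, 80), (340, 80), (340, 150), (120, 150)],
}

def polygon_to_mask(polygon, width, height):
    """Rasterize a grounding polygon into a binary mask (1 = answer evidence)."""
    mask = Image.new("L", (width, height), 0)
    ImageDraw.Draw(mask).polygon(polygon, outline=1, fill=1)
    return mask

if __name__ == "__main__":
    image = Image.open(sample["image"])  # assumes the image file is on disk
    mask = polygon_to_mask(sample["grounding_polygon"], *image.size)
```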

Numerous applications would be possible if answer groundings were provided in response to visual questions. First, they enable assessment of whether a VQA model reasons based on the correct visual evidence, which is valuable both as an explanation and as an aid for developers debugging models. Second, answer groundings enable segmenting the relevant content from the background, a valuable precursor to obfuscating the background to preserve privacy, since photographers can inadvertently capture private information in the background of their images. Third, a service that magnifies the relevant visual evidence could help users find the desired information more quickly. This is valuable in part because answers from VQA services can be insufficient, for example because humans exhibit "reporting bias": they describe what they find interesting without knowing what a particular person or population is seeking.
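To illustrate the second and third applications, the hedged sketch below blurs everything outside an answer-grounding mask (privacy-preserving background obfuscation) and crops to the mask's bounding box (magnifying the evidence). The function names and the PIL/NumPy-based approach are assumptions for demonstration, not part of the dataset or the paper's method.

```python
import numpy as np
from PIL import Image, ImageFilter

def obfuscate_background(image: Image.Image, mask: Image.Image, blur_radius: int = 25) -> Image.Image:
    """Blur everything outside the answer-grounding mask to hide incidental private content."""
    blurred = image.filter(ImageFilter.GaussianBlur(blur_radius))
    binary = mask.point(lambda v: 255 if v else 0)   # composite expects a 0/255 mask
    return Image.composite(image, blurred, binary)   # keep original pixels only inside the mask

def magnify_evidence(image: Image.Image, mask: Image.Image, margin: int = 10) -> Image.Image:
    """Crop to the bounding box of the grounding region so the evidence fills the view."""
    ys, xs = np.nonzero(np.array(mask))
    left = max(int(xs.min()) - margin, 0)
    top = max(int(ys.min()) - margin, 0)
    right = min(int(xs.max()) + margin, image.width)
    bottom = min(int(ys.max()) + margin, image.height)
    return image.crop((left, top, right, bottom))
```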



License


  • Unknown
