VizWiz (VizWiz-VQA)

Introduced by Gurari et al. in VizWiz Grand Challenge: Answering Visual Questions from Blind People

The VizWiz-VQA dataset originates from a natural visual question answering setting in which blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. The proposed challenge addresses two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered.

Source: https://vizwiz.org/tasks-and-datasets/vqa/
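The sketch below illustrates how one might load a VizWiz-VQA annotation file and score a predicted answer against the 10 crowdsourced answers with the VQA-style accuracy used for this kind of benchmark (averaging over leave-one-out subsets of the human answers). The file path and field names (`image`, `question`, `answers`, `answerable`) are assumptions based on the VQA-style JSON format; check the schema of the actual download from the source above, which also applies answer normalization that is omitted here.

```python
import json
from collections import Counter


def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """VQA-style accuracy: average min(#matches / 3, 1) over the ten
    leave-one-out subsets of the 10 crowdsourced answers."""
    accs = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(1 for a in others if a == predicted)
        accs.append(min(matches / 3.0, 1.0))
    return sum(accs) / len(accs)


def load_annotations(path: str) -> list[dict]:
    """Load a VizWiz-VQA annotation file (e.g. a val split JSON)."""
    with open(path) as f:
        return json.load(f)


if __name__ == "__main__":
    # Hypothetical path and assumed field names -- adjust to the release you use.
    entries = load_annotations("Annotations/val.json")
    sample = entries[0]
    answers = [a["answer"].strip().lower() for a in sample["answers"]]

    print(sample["image"], "|", sample["question"])
    print("answerable:", sample.get("answerable"))

    # Score the most common crowdsourced answer as if it were a model prediction.
    prediction = Counter(answers).most_common(1)[0][0]
    print("accuracy of majority answer:", vqa_accuracy(prediction, answers))
```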
