no code implementations • 6 Jul 2022 • Avi Ziskind, Sujeong Kim, Giedrius T. Burachas
Herein, we propose a framework for evaluating the illumination invariance of visual representations in the context of depth perception.
no code implementations • 13 Oct 2021 • Kamran Alipour, Arijit Ray, Xiao Lin, Michael Cogswell, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas
In the domain of Visual Question Answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain Image-Question (IQ) pairs.
no code implementations • 2 Jul 2020 • Kamran Alipour, Arijit Ray, Xiao Lin, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas
In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).