Search Results for author: Giedrius T. Burachas

Found 3 papers, 0 papers with code

Learning Invariant World State Representations with Predictive Coding

no code implementations • 6 Jul 2022 • Avi Ziskind, Sujeong Kim, Giedrius T. Burachas

Herein, we propose a framework for evaluating visual representations for illumination invariance in the context of depth perception.

Decoder • Self-Supervised Learning

Improving Users' Mental Model with Attention-directed Counterfactual Edits

no code implementations • 13 Oct 2021 • Kamran Alipour, Arijit Ray, Xiao Lin, Michael Cogswell, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas

In the domain of Visual Question Answering (VQA), studies have shown improvements in users' mental models of a VQA system when users are exposed to examples of how the system answers certain Image-Question (IQ) pairs.

counterfactual • Question Answering • +2

The Impact of Explanations on AI Competency Prediction in VQA

no code implementations • 2 Jul 2020 • Kamran Alipour, Arijit Ray, Xiao Lin, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas

In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).

Language Modelling • Question Answering • +1
