Differentiable Outlier Detection Enables Robust Deep Multimodal Analysis

11 Feb 2023 · Zhu Wang, Sourav Medya, Sathya N. Ravi

Deep network models are often purely inductive, both during training and when performing inference on unseen data. As a result, such models frequently fail to capture the semantic information and implicit dependencies that exist among objects (or concepts) at the population level. Moreover, it remains unclear how domain or prior modal knowledge can be specified in a backpropagation-friendly manner, especially in large-scale and noisy settings. In this work, we propose an end-to-end vision-and-language model that incorporates explicit knowledge graphs. We also introduce an interactive out-of-distribution (OOD) layer based on an implicit network operator; this layer filters the noise introduced by the external knowledge base. We apply our model to several vision-and-language downstream tasks, including visual question answering, visual reasoning, and image-text retrieval, on different datasets. Our experiments show that it is possible to design models that perform on par with state-of-the-art results while requiring significantly fewer samples and less training time.
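
The abstract describes an OOD layer that suppresses noisy entries retrieved from an external knowledge base before they are fused with the vision-language representation. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's VK-OOD implementation (which uses an implicit network operator): the class name `KnowledgeOODFilter`, the gating scheme, and all shapes are assumptions made for illustration only.

```python
# Hypothetical sketch (not the authors' VK-OOD code): a differentiable
# outlier-filtering layer that down-weights external knowledge embeddings
# that lie far from the current vision-language representation.

import torch
import torch.nn as nn


class KnowledgeOODFilter(nn.Module):
    """Scores each retrieved knowledge embedding against the fused
    vision-language feature and softly masks likely outliers."""

    def __init__(self, dim, temperature=1.0):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # project knowledge into the joint space
        self.temperature = temperature

    def forward(self, fused_feat, knowledge_emb):
        # fused_feat:    (batch, dim)     joint vision-language feature
        # knowledge_emb: (batch, k, dim)  k retrieved knowledge-graph embeddings
        k_proj = self.proj(knowledge_emb)                      # (batch, k, dim)
        # Cosine similarity serves as an (inverse) outlier score.
        sim = torch.cosine_similarity(
            k_proj, fused_feat.unsqueeze(1), dim=-1)           # (batch, k)
        # Soft gate in [0, 1]; low-similarity (likely noisy) entries are suppressed.
        gate = torch.sigmoid(sim / self.temperature)           # (batch, k)
        filtered = knowledge_emb * gate.unsqueeze(-1)          # (batch, k, dim)
        # Pool the surviving knowledge into a single vector for fusion.
        pooled = filtered.sum(dim=1) / (gate.sum(dim=1, keepdim=True) + 1e-6)
        return pooled, gate


# Usage example with toy shapes.
if __name__ == "__main__":
    layer = KnowledgeOODFilter(dim=256)
    fused = torch.randn(4, 256)          # e.g., from a VQA encoder
    knowledge = torch.randn(4, 8, 256)   # 8 retrieved facts per example
    pooled, gate = layer(fused, knowledge)
    print(pooled.shape, gate.shape)      # torch.Size([4, 256]) torch.Size([4, 8])
```

Because the gate is a smooth function of the similarity score, the filtering step stays differentiable and can be trained end-to-end with the rest of the model, which is the property the abstract emphasizes.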

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Visual Reasoning | NLVR2 Dev | VK-OOD | Accuracy | 83.9 | #10 |
| Visual Question Answering (VQA) | OK-VQA | VK-OOD | Accuracy | 52.4 | #15 |
| Visual Question Answering | VQA v2 test-dev | VK-OOD | Accuracy | 76.8 | #7 |
