DeVLBert: Learning Deconfounded Visio-Linguistic Representations

16 Aug 2020 · Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, Fei Wu

In this paper, we propose to investigate the problem of out-of-domain visio-linguistic pretraining, where the pretraining data distribution differs from that of the downstream data on which the pretrained model will be fine-tuned. Existing methods for this problem are purely likelihood-based, which leads to spurious correlations and hurts generalization when the model is transferred to out-of-domain downstream tasks...
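The abstract contrasts purely likelihood-based estimation with a deconfounded alternative. A standard tool for removing confounding is backdoor adjustment, P(y|do(x)) = Σ_z P(y|x,z)P(z); the toy distribution below is not from the paper, but it illustrates how a confounder z can make the conditional P(y|x) suggest a dependence on x even when x has no causal effect on y:

```python
# Illustrative sketch (not the paper's method): a confounder z induces a
# spurious correlation between x and y in the plain conditional P(y|x),
# while backdoor adjustment recovers the (null) causal effect.
# All probabilities below are made-up numbers for demonstration.

# Joint distribution P(x, y, z); z influences both x and y,
# and y is independent of x given z (so x has no causal effect on y).
P = {}
for z, pz in [(0, 0.5), (1, 0.5)]:
    px1 = 0.8 if z == 1 else 0.2      # P(x=1 | z)
    py1 = 0.9 if z == 1 else 0.1      # P(y=1 | z), same for both x values
    for x, px in [(1, px1), (0, 1 - px1)]:
        for y, py in [(1, py1), (0, 1 - py1)]:
            P[(x, y, z)] = pz * px * py

def cond_y1_given_x(x):
    """Likelihood-based estimate P(y=1 | x): absorbs the confounding."""
    num = sum(P[(x, 1, z)] for z in (0, 1))
    den = sum(P[(x, y, z)] for y in (0, 1) for z in (0, 1))
    return num / den

def backdoor_y1_given_do_x(x):
    """Backdoor adjustment: sum over z of P(y=1 | x, z) * P(z)."""
    total = 0.0
    for z in (0, 1):
        pz = sum(P[(xx, y, z)] for xx in (0, 1) for y in (0, 1))
        pxz = sum(P[(x, y, z)] for y in (0, 1))
        total += (P[(x, 1, z)] / pxz) * pz
    return total

print(cond_y1_given_x(1))          # ≈ 0.74: spurious dependence on x
print(backdoor_y1_given_do_x(1))   # 0.5: z-adjusted, no effect of x on y
```

Here P(y=1|x=1) = 0.74 while P(y=1|x=0) = 0.26, yet the backdoor-adjusted value is 0.5 for either x, confirming the apparent association is entirely due to z.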


