RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models

21 Apr 2023  ·  Seulki Park, Daeho Um, Hajung Yoon, Sanghyuk Chun, Sangdoo Yun, Jin Young Choi

In this paper, we propose a robustness benchmark for image-text matching models to assess their vulnerabilities. To this end, we insert adversarial texts and images into the search pool (i.e., gallery set) and evaluate models with the adversarial data. Specifically, we replace a word in a caption to change its meaning, and we mix images with different images to create perceptible changes in pixels. We assume that such explicit alterations would not deceive a robust model, as it should understand the holistic meaning of texts and images simultaneously. However, in our evaluations on the proposed benchmark, many state-of-the-art models show significant performance degradation, e.g., Recall@1: 81.9% $\rightarrow$ 64.5% in BLIP and 66.1% $\rightarrow$ 37.5% in VSE$\infty$, where the models favor adversarial texts/images over the original ones. This reveals that current vision-language models may not account for subtle changes or may not understand the overall context of texts and images. Our findings can provide insights for improving the robustness of vision-language models and for devising more diverse stress-test methods for cross-modal retrieval. Source code and dataset will be available at https://github.com/pseulki/rococo.
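The abstract only sketches the two perturbation types, so the snippet below is a minimal illustrative sketch of how such gallery distractors could be constructed, assuming simple alpha blending for image mixing and a hand-picked word pair for the meaning-changing substitution; the exact rules and mixing ratios used by RoCOCO may differ and are documented in the official repository.

```python
# Illustrative sketch only: alpha blending and single-word substitution are
# assumptions, not the authors' exact construction procedure.
import numpy as np
from PIL import Image


def mix_images(img_a: Image.Image, img_b: Image.Image, alpha: float = 0.5) -> Image.Image:
    """Blend two images pixel-wise: (1 - alpha) * img_a + alpha * img_b."""
    img_b = img_b.resize(img_a.size)
    a = np.asarray(img_a, dtype=np.float32)
    b = np.asarray(img_b, dtype=np.float32)
    mixed = (1.0 - alpha) * a + alpha * b
    return Image.fromarray(mixed.astype(np.uint8))


def replace_word(caption: str, target: str, replacement: str) -> str:
    """Swap one word to flip the caption's meaning (hypothetical word pair)."""
    return " ".join(replacement if w == target else w for w in caption.split())


# Example: an original COCO-style caption and a meaning-changing substitution.
original = "a man riding a horse on the beach"
adversarial = replace_word(original, "horse", "motorcycle")
print(adversarial)  # -> "a man riding a motorcycle on the beach"
```

A robust retrieval model should still rank the original caption/image above such distractors; the reported Recall@1 drops indicate that many models do not.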
