Dissecting Deep Metric Learning Losses for Image-Text Retrieval

21 Oct 2022 · Hong Xuan, Xi Chen

Visual-Semantic Embedding (VSE) is a prevalent approach to image-text retrieval that learns a joint embedding space between the image and language modalities in which semantic similarity is preserved. The triplet loss with hard-negative mining has become the de facto objective for most VSE methods. Inspired by recent progress in deep metric learning (DML) in the image domain, which has given rise to new loss functions that outperform the triplet loss, we revisit the problem of finding better objectives for VSE in image-text matching. Despite some attempts to design losses based on gradient movement, most DML losses are defined empirically in the embedding space. Instead of directly applying these loss functions, which may lead to sub-optimal gradient updates in the model parameters, we present a novel Gradient-based Objective AnaLysis framework, or GOAL, to systematically analyze the combinations and reweightings of the gradients in existing DML losses. With the help of this analysis framework, we further propose a new family of objectives that explores different gradient combinations. When the gradients are not integrable to a valid loss function, we implement the proposed objectives so that they operate directly in the gradient space rather than on losses in the embedding space. Comprehensive experiments demonstrate that our novel objectives consistently improve over baselines across different visual/text features and model frameworks. We also show the generalizability of the GOAL framework by extending it to other models that use triplet-family losses, including vision-language models with heavy cross-modal interaction, achieving state-of-the-art results on image-text retrieval on COCO and Flickr30K.
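The paper is the authoritative source for the actual GOAL objectives; as an illustration of the two ideas the abstract contrasts, the PyTorch sketch below (all names hypothetical) shows (a) the hinge-based triplet loss with hardest-negative mining that most VSE methods use, and (b) how an objective can be implemented directly in the gradient space via a custom autograd.Function when its gradient field does not integrate to any scalar loss. The specific negative reweighting in backward() is an illustrative assumption, not the paper's gradient combination.

```python
import torch

def triplet_loss_hard_negative(im, txt, margin=0.2):
    """Hinge triplet loss with hardest-negative mining (the common VSE baseline).
    im, txt: L2-normalized embeddings of shape (batch, dim); row i is a matched pair."""
    scores = im @ txt.t()                            # cosine similarity matrix
    pos = scores.diag().view(-1, 1)                  # positive-pair scores
    eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_txt = (margin + scores - pos).clamp(min=0).masked_fill(eye, 0)
    cost_im = (margin + scores - pos.t()).clamp(min=0).masked_fill(eye, 0)
    # keep only the hardest (highest-cost) negative per anchor
    return cost_txt.max(1)[0].sum() + cost_im.max(0)[0].sum()

class GradientSpaceObjective(torch.autograd.Function):
    """Objective defined by its gradient alone: forward() returns a dummy scalar,
    backward() writes the desired update direction on the similarity matrix."""

    @staticmethod
    def forward(ctx, scores, temperature):
        ctx.save_for_backward(scores)
        ctx.temperature = temperature
        return scores.new_zeros(())                  # value is irrelevant to learning

    @staticmethod
    def backward(ctx, grad_output):
        (scores,) = ctx.saved_tensors
        eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        # Hypothetical reweighting: softmax weights push negatives apart, while
        # a uniform pull draws each positive pair together. This gradient field
        # need not be the derivative of any scalar loss.
        neg_w = torch.softmax(
            scores.masked_fill(eye, float("-inf")) / ctx.temperature, dim=1
        )
        grad = neg_w.masked_fill(eye, 0.0) - eye.float() / scores.size(0)
        return grad_output * grad, None
```

Calling `GradientSpaceObjective.apply(scores, 0.1).backward()` then propagates the prescribed update through both encoders, exactly as a conventional loss would.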


Results from the Paper


Ranked #8 on Cross-Modal Retrieval on Flickr30k (using extra training data)

| Task                  | Dataset   | Model        | Metric            | Value | Global Rank |
|-----------------------|-----------|--------------|-------------------|-------|-------------|
| Cross-Modal Retrieval | COCO 2014 | VSE-Gradient | Image-to-text R@1 | 81.4  | #8          |
|                       |           |              | Image-to-text R@5 | 95.6  | #7          |
|                       |           |              | Image-to-text R@10| 97.9  | #7          |
|                       |           |              | Text-to-image R@1 | 63.6  | #9          |
|                       |           |              | Text-to-image R@5 | 86.0  | #8          |
|                       |           |              | Text-to-image R@10| 91.5  | #8          |
| Cross-Modal Retrieval | Flickr30k | VSE-Gradient | Image-to-text R@1 | 97.0  | #8          |
|                       |           |              | Image-to-text R@5 | 99.6  | #9          |
|                       |           |              | Image-to-text R@10| 100   | #1          |
|                       |           |              | Text-to-image R@1 | 86.3  | #9          |
|                       |           |              | Text-to-image R@5 | 97.4  | #8          |
|                       |           |              | Text-to-image R@10| 99.0  | #7          |
