MDMMT: Multidomain Multimodal Transformer for Video Retrieval

19 Mar 2021  ·  Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko

We present a new state of the art for the text-to-video retrieval task on the MSRVTT and LSMDC benchmarks, where our model outperforms all previous solutions by a large margin. Moreover, these state-of-the-art results are achieved with a single model on both datasets without finetuning. This multidomain generalisation is achieved by a proper combination of different video-caption datasets. We show that jointly training on several datasets improves test results on each of them. Additionally, we check the intersection between many popular datasets and find that MSRVTT has a significant overlap between its test and train parts; the same is observed for ActivityNet.
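As a rough illustration of the "combination of different video-caption datasets" mentioned above, the sketch below pools several annotation sets into one training stream. This is a minimal sketch, not the authors' implementation: the `VideoCaptionDataset` class, the JSON annotation format, and the file paths are hypothetical placeholders, and the paper's actual sampling strategy may differ.

```python
import json
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class VideoCaptionDataset(Dataset):
    """Yields (video_id, caption) pairs from a JSON annotation file.
    Hypothetical format: [{"video_id": ..., "caption": ...}, ...]."""
    def __init__(self, annotation_path):
        with open(annotation_path) as f:
            self.items = json.load(f)

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        item = self.items[idx]
        return item["video_id"], item["caption"]

# One loader over the union of domains; shuffled batches mix samples
# from all constituent datasets (paths are placeholders).
combined = ConcatDataset([
    VideoCaptionDataset("msrvtt_train.json"),
    VideoCaptionDataset("lsmdc_train.json"),
    VideoCaptionDataset("activitynet_train.json"),
])
loader = DataLoader(combined, batch_size=64, shuffle=True)
```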

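The abstract also reports train/test overlap in MSRVTT and ActivityNet. The paper's actual de-duplication procedure is not described here, so the following is only one plausible way to flag such overlap: compare clips by cosine similarity of precomputed, L2-normalized video embeddings and flag high-similarity pairs. The threshold value and the embedding source are assumptions.

```python
import numpy as np

def find_overlaps(train_emb, test_emb, threshold=0.95):
    """train_emb: (N_train, D), test_emb: (N_test, D), L2-normalized.
    Returns (test_idx, train_idx) pairs whose cosine similarity exceeds
    the threshold, i.e. likely duplicate or near-duplicate clips."""
    sim = test_emb @ train_emb.T  # cosine similarity matrix
    test_idx, train_idx = np.where(sim > threshold)
    return list(zip(test_idx.tolist(), train_idx.tolist()))

# Toy usage with random normalized vectors standing in for real
# video features:
rng = np.random.default_rng(0)
def normed(n, d=512):
    x = rng.normal(size=(n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

print(len(find_overlaps(normed(100), normed(20))))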

Results from the Paper


Ranked #25 on Video Retrieval on LSMDC (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Video Retrieval | LSMDC | MDMMT | text-to-video R@1 | 18.8 | #25 |
| Video Retrieval | LSMDC | MDMMT | text-to-video R@5 | 38.5 | #21 |
| Video Retrieval | LSMDC | MDMMT | text-to-video R@10 | 47.9 | #21 |
| Video Retrieval | LSMDC | MDMMT | text-to-video Median Rank | 12.3 | #11 |
| Video Retrieval | LSMDC | MDMMT | text-to-video Mean Rank | 58.0 | #9 |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@1 | 23.1 | #30 |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@5 | 49.8 | #27 |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@10 | 61.8 | #26 |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video Mean Rank | 52.8 | #5 |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video Median Rank | 6 | #10 |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video Mean Rank | 16.5 | #20 |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@1 | 38.9 | #37 |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@5 | 69.0 | #35 |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@10 | 79.7 | #36 |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video Median Rank | 2 | #10 |
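For reference, the metrics in the table above are the standard text-to-video retrieval measures: given an (N_text x N_video) similarity matrix with the ground-truth match on the diagonal, R@K is the percentage of queries whose correct video ranks in the top K, and Median/Mean Rank summarize the rank of the correct video. A minimal sketch of the conventional computation (not the authors' evaluation code):

```python
import numpy as np

def retrieval_metrics(sim):
    """sim[i, j] = similarity of text query i to video j;
    sim[i, i] is the ground-truth pair. Ranks are 1-based."""
    order = np.argsort(-sim, axis=1)            # best match first
    gt = np.arange(sim.shape[0])[:, None]
    ranks = np.argmax(order == gt, axis=1) + 1  # rank of correct video
    return {
        "R@1": np.mean(ranks <= 1) * 100,
        "R@5": np.mean(ranks <= 5) * 100,
        "R@10": np.mean(ranks <= 10) * 100,
        "MedianRank": float(np.median(ranks)),
        "MeanRank": float(ranks.mean()),
    }

# Example with random similarities for 1000 text-video pairs (the
# MSR-VTT-1kA split uses a 1000-candidate setup of this kind):
print(retrieval_metrics(np.random.randn(1000, 1000)))
```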

Methods


No methods listed for this paper.