Object Relational Graph with Teacher-Recommended Learning for Video Captioning

Taking full advantage of information from both vision and language is critical for the video captioning task. Existing models lack adequate visual representation because they neglect the interactions between objects, and they lack sufficient training for content-related words because of the long-tailed distribution of words in captions. In this paper, we propose a complete video captioning system that includes both a novel model and an effective training strategy. Specifically, we propose an object relational graph (ORG) based encoder, which captures detailed interaction features to enrich the visual representation. Meanwhile, we design a teacher-recommended learning (TRL) method that makes full use of a successful external language model (ELM) to integrate abundant linguistic knowledge into the caption model. The ELM generates semantically similar word proposals that extend the ground-truth words used for training, alleviating the long-tailed problem. Experimental evaluations on three benchmarks (MSVD, MSR-VTT, and VATEX) show that the proposed ORG-TRL system achieves state-of-the-art performance. Extensive ablation studies and visualizations illustrate the effectiveness of our system.
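To make the ORG idea concrete, below is a minimal PyTorch sketch of a relational-graph layer over detected object features: a learned, softmax-normalized affinity matrix serves as the graph adjacency, and one graph-convolution step produces relation-enhanced object features. The class name, feature dimensions, and single-step residual update are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of a learnable object relational graph (ORG) layer.
# The relation matrix, dimensions, and single GCN step are assumptions
# for illustration, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectRelationalGraph(nn.Module):
    def __init__(self, feat_dim: int = 2048, hidden_dim: int = 512):
        super().__init__()
        # Project object features before computing pairwise relations.
        self.query = nn.Linear(feat_dim, hidden_dim)
        self.key = nn.Linear(feat_dim, hidden_dim)
        # Graph-convolution weight applied after message passing.
        self.gcn = nn.Linear(feat_dim, feat_dim)

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, num_objects, feat_dim) RoI features from a detector.
        q, k = self.query(objects), self.key(objects)
        # Learned adjacency: softmax-normalized pairwise affinities.
        adj = F.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        # Aggregate neighbor information and add a residual connection,
        # yielding relation-enhanced object features.
        return objects + F.relu(self.gcn(adj @ objects))

# Usage: enhance 10 detected-object features for a batch of 2 frames.
feats = torch.randn(2, 10, 2048)
enhanced = ObjectRelationalGraph()(feats)  # (2, 10, 2048)
```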

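Similarly, the TRL objective can be sketched as a distillation-style loss: the caption model is trained jointly on the hard ground-truth words and on the soft word distribution recommended by the ELM, whose semantically similar word proposals soften the long-tailed supervision. The function name, mixing weight `beta`, and `temperature` below are hypothetical choices for illustration, not values from the paper.

```python
# A minimal sketch of teacher-recommended learning (TRL): combine
# cross-entropy on ground-truth words with a KL term toward the external
# language model's soft word distribution. Hyperparameters are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def trl_loss(student_logits: torch.Tensor,
             elm_logits: torch.Tensor,
             gt_words: torch.Tensor,
             beta: float = 0.5,
             temperature: float = 2.0) -> torch.Tensor:
    # student_logits, elm_logits: (batch * steps, vocab_size)
    # gt_words: (batch * steps,) ground-truth token ids
    # Hard supervision from the annotated captions.
    ce = F.cross_entropy(student_logits, gt_words)
    # Soft supervision: the ELM's distribution recommends semantically
    # similar words, easing the long-tailed word problem.
    soft_targets = F.softmax(elm_logits / temperature, dim=-1)
    kl = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  soft_targets, reduction="batchmean")
    return (1.0 - beta) * ce + beta * (temperature ** 2) * kl

# Usage with a toy vocabulary of 100 words over 8 decoding steps.
s, e = torch.randn(8, 100), torch.randn(8, 100)
gt = torch.randint(0, 100, (8,))
loss = trl_loss(s, e, gt)
```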

Results from the Paper


Ranked #9 on Video Captioning on VATEX (using extra training data)

| Task             | Dataset | Model   | Metric  | Value | Global Rank | Uses Extra Training Data |
|------------------|---------|---------|---------|-------|-------------|--------------------------|
| Video Captioning | MSVD    | ORG-TRL | CIDEr   | 95.2  | #15         | —                        |
| Video Captioning | MSVD    | ORG-TRL | BLEU-4  | 54.3  | #12         | —                        |
| Video Captioning | MSVD    | ORG-TRL | METEOR  | 36.4  | #11         | —                        |
| Video Captioning | MSVD    | ORG-TRL | ROUGE-L | 73.9  | #11         | —                        |
| Video Captioning | VATEX   | ORG-TRL | BLEU-4  | 32.1  | #9          | Yes                      |
| Video Captioning | VATEX   | ORG-TRL | CIDEr   | 49.7  | #9          | Yes                      |
| Video Captioning | VATEX   | ORG-TRL | METEOR  | 22.2  | #6          | Yes                      |
| Video Captioning | VATEX   | ORG-TRL | ROUGE-L | 48.9  | #7          | Yes                      |

Methods


Object Relational Graph (ORG) encoder
Teacher-Recommended Learning (TRL)
External Language Model (ELM)