Knowledge Guided Entity-aware Video Captioning and A Basketball Benchmark

25 Jan 2024  ·  Zeyu Xi, Ge Shi, Xuefen Li, Junchi Yan, Zun Li, Lifang Wu, Zilin Liu, Liang Wang ·

Despite the recent emergence of video captioning models, generating text descriptions with specific entity names and fine-grained actions remains far from solved, even though it has valuable applications such as live text broadcast of basketball games. In this paper, a new multimodal knowledge-graph-supported basketball benchmark for video captioning is proposed. Specifically, we construct a multimodal basketball game knowledge graph (KG_NBA_2022) to provide additional knowledge beyond the videos. Then, a multimodal basketball game video captioning dataset (VC_NBA_2022), containing 9 types of fine-grained shooting events and knowledge of 286 players (i.e., images and names), is constructed based on KG_NBA_2022. We develop a knowledge guided entity-aware video captioning network (KEANet), built on a candidate player list in encoder-decoder form, for basketball live text broadcast. The temporal contextual information in the video is encoded by a bi-directional GRU (Bi-GRU) module, and an entity-aware module is designed to model the relationships among the players and highlight the key players. Extensive experiments on multiple sports benchmarks demonstrate that KEANet effectively leverages extra knowledge and outperforms advanced video captioning models. The proposed dataset and corresponding code will be publicly available soon.
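The abstract describes two components: a Bi-GRU that encodes temporal context from frame features, and an entity-aware module that relates candidate players to the video to highlight key ones. A minimal PyTorch sketch of that idea is shown below; the exact layer sizes, the mean-pooled video query, and the scaled dot-product attention over player embeddings are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Hypothetical sketch: encode per-frame video features with a bi-directional GRU."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, frames):            # frames: (B, T, feat_dim)
        ctx, _ = self.gru(frames)         # ctx: (B, T, 2 * hidden_dim)
        return ctx

class EntityAwareAttention(nn.Module):
    """Hypothetical sketch: weight candidate-player embeddings by the video context."""
    def __init__(self, ctx_dim=512, ent_dim=512):
        super().__init__()
        self.proj = nn.Linear(ent_dim, ctx_dim)

    def forward(self, ctx, players):      # ctx: (B, T, D), players: (B, N, E)
        q = ctx.mean(dim=1, keepdim=True)                 # pooled video query (B, 1, D)
        k = self.proj(players)                            # player keys (B, N, D)
        scores = torch.softmax(
            q @ k.transpose(1, 2) / k.size(-1) ** 0.5,    # scaled dot-product (B, 1, N)
            dim=-1,
        )
        ent_feat = (scores @ players).squeeze(1)          # attended entity feature (B, E)
        return ent_feat, scores.squeeze(1)                # weights over players (B, N)

# Example: 2 clips of 16 frames, each with a 10-player candidate list.
enc = BiGRUEncoder()
att = EntityAwareAttention()
frames = torch.randn(2, 16, 512)
players = torch.randn(2, 10, 512)
ctx = enc(frames)
ent_feat, weights = att(ctx, players)
```

The attention weights over the candidate list could then condition the caption decoder, so that the generated sentence names the highlighted player.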

