MSG-BART: Multi-granularity Scene Graph-Enhanced Encoder-Decoder Language Model for Video-grounded Dialogue Generation

26 Sep 2023  ·  Hongcheng Liu, Zhe Chen, Hui Li, Pingjie Wang, Yanfeng Wang, Yu Wang

Generating dialogue grounded in videos requires a high level of understanding of, and reasoning about, the visual scenes in the videos. However, existing large vision-language models handle this poorly, particularly spatio-temporal relationship reasoning, because they rely on latent visual features and decoder-only architectures. In this paper, we propose a novel approach named MSG-BART, which enhances the integration of video information by incorporating a multi-granularity spatio-temporal scene graph into an encoder-decoder pre-trained language model. Specifically, we integrate the global and local scene graphs into the encoder and decoder, respectively, to improve both overall perception and targeted reasoning. To further improve information selection, we propose a multi-pointer network that facilitates selection between text and video. Extensive experiments on three video-grounded dialogue benchmarks show that MSG-BART significantly outperforms a range of state-of-the-art approaches.
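The abstract does not include code, so the following is a minimal, hypothetical PyTorch sketch of the described architecture, written from the abstract alone rather than from the authors' released implementation. It assumes scene-graph nodes arrive pre-encoded as fixed-size vectors and uses a vanilla Transformer in place of the pre-trained BART backbone; the class name `MSGBartSketch` and inputs such as `node_token_ids` (a vocabulary id per local graph node, so nodes can be "copied") are illustrative, not from the paper.

```python
# Hypothetical sketch of an MSG-BART-style model (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSGBartSketch(nn.Module):
    def __init__(self, vocab_size=50265, d_model=256, nhead=8, num_layers=3):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)
        # Gate of the multi-pointer network: mixing weights for
        # (generate from vocab, copy from dialogue text, copy from graph nodes).
        self.gate = nn.Linear(d_model, 3)

    def forward(self, src_ids, tgt_ids, global_nodes, local_nodes,
                node_token_ids):
        # Global scene graph fused on the encoder side: node vectors are
        # concatenated with the dialogue-history token embeddings.
        src_emb = self.tok_emb(src_ids)
        memory = self.encoder(torch.cat([src_emb, global_nodes], dim=1))
        # Local scene graph fused on the decoder side: the decoder
        # cross-attends to the encoder memory plus the local node vectors.
        dec_memory = torch.cat([memory, local_nodes], dim=1)
        T = tgt_ids.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(self.tok_emb(tgt_ids), dec_memory,
                              tgt_mask=causal)

        gen_dist = F.softmax(self.lm_head(hidden), dim=-1)
        # Pointer attention from each decoder state to source tokens and to
        # local graph nodes, scattered back onto the vocabulary as copy
        # distributions.
        scale = hidden.size(-1) ** 0.5
        txt_attn = F.softmax(hidden @ src_emb.transpose(1, 2) / scale, dim=-1)
        node_attn = F.softmax(hidden @ local_nodes.transpose(1, 2) / scale,
                              dim=-1)
        txt_dist = torch.zeros_like(gen_dist).scatter_add(
            -1, src_ids.unsqueeze(1).expand_as(txt_attn), txt_attn)
        node_dist = torch.zeros_like(gen_dist).scatter_add(
            -1, node_token_ids.unsqueeze(1).expand_as(node_attn), node_attn)
        # Multi-pointer mixture: choose between generating and the two sources.
        w = F.softmax(self.gate(hidden), dim=-1)
        return (w[..., 0:1] * gen_dist
                + w[..., 1:2] * txt_dist
                + w[..., 2:3] * node_dist)


# Smoke test with toy shapes: 2 dialogues, 10 history tokens, 5 target tokens,
# 8 global and 6 local scene-graph nodes.
model = MSGBartSketch()
probs = model(torch.randint(0, 50265, (2, 10)),
              torch.randint(0, 50265, (2, 5)),
              torch.randn(2, 8, 256),
              torch.randn(2, 6, 256),
              torch.randint(0, 50265, (2, 6)))
print(probs.shape)  # torch.Size([2, 5, 50265])
```

The encoder/decoder split mirrors the abstract's design choice: the globally fused graph gives the encoder a scene-wide view, while the locally fused graph lets the decoder attend to fine-grained targets at generation time, with the pointer gate arbitrating between textual and visual sources per token.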
