Cross-Modality Gated Attention Fusion for Multimodal Sentiment Analysis

25 Aug 2022  ·  Ming Jiang, Shaoxiong Ji

Multimodal sentiment analysis (MSA) is the task of predicting a sentiment score from the multiple modalities of an opinion video. Much previous research has demonstrated the importance of exploiting both the shared and the modality-specific information across modalities. However, high-order combined signals from multimodal data can also help learn satisfactory representations. In this paper, we propose CMGA, a Cross-Modality Gated Attention fusion model for MSA that performs adequate interaction across different modality pairs. CMGA also adds a forget gate to filter out the noisy and redundant signals introduced during the interaction. We experiment on two benchmark MSA datasets, MOSI and MOSEI, showing that CMGA outperforms several baseline models. We also conduct an ablation study to illustrate the function of the different components of CMGA.
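
The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of what a cross-modality gated attention block with a forget gate could look like for one modality pair. All names here (`CrossModalityGatedAttention`, `forget`, `dim`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class CrossModalityGatedAttention(nn.Module):
    """Sketch: cross-attention over one modality pair, then a forget gate.

    Assumed structure (not taken from the paper): modality A queries
    modality B, and a sigmoid gate decides how much of the resulting
    interaction signal to keep versus the original representation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        # Forget gate: filters noisy/redundant parts of the cross-modal signal.
        self.forget = nn.Linear(2 * dim, dim)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a, x_b: (batch, seq_len, dim) features of two modalities.
        q, k, v = self.query(x_a), self.key(x_b), self.value(x_b)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        cross = torch.softmax(scores, dim=-1) @ v  # pairwise interaction
        # Gate blends the interaction signal with the original features.
        g = torch.sigmoid(self.forget(torch.cat([x_a, cross], dim=-1)))
        return g * cross + (1.0 - g) * x_a


# Example: fuse text and audio features for one modality pair.
fusion = CrossModalityGatedAttention(dim=64)
text = torch.randn(8, 20, 64)   # (batch, seq_len, dim)
audio = torch.randn(8, 20, 64)
fused = fusion(text, audio)     # -> (8, 20, 64)
```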

Datasets

MOSI · MOSEI
