TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis

7 Sep 2020  ·  Zilong Wang, Zhaohong Wan, Xiaojun Wan

Multimodal sentiment analysis is an important research area that predicts a speaker's sentiment from features extracted from the textual, visual and acoustic modalities. The central challenge is how to fuse the multimodal information. A variety of fusion methods have been proposed, but few of them adopt end-to-end translation models to mine the subtle correlations between modalities. Inspired by the recent success of the Transformer in machine translation, we propose a new fusion method, TransModality, to address the task of multimodal sentiment analysis. We assume that translation between modalities contributes to a better joint representation of the speaker's utterance. With the Transformer, the learned features embody information from both the source and the target modality. We validate our model on multiple multimodal datasets: CMU-MOSI, MELD, and IEMOCAP. The experiments show that our proposed method achieves state-of-the-art performance.
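To make the idea of cross-modal translation concrete, the sketch below shows one plausible way to translate a source modality (e.g. text features) into a target modality (e.g. acoustic features) with a standard Transformer encoder-decoder, and to reuse the decoder output as a fused representation. This is a minimal illustration assuming PyTorch's `nn.Transformer`; the class name `CrossModalTranslator`, the feature dimensions, and the reconstruction loss are illustrative assumptions, not the paper's exact architecture or training objective.

```python
import torch
import torch.nn as nn


class CrossModalTranslator(nn.Module):
    """Sketch: translate a source modality into a target modality with a
    Transformer, so the output features carry information from both streams.
    Dimensions and layer sizes are placeholders, not the paper's settings."""

    def __init__(self, src_dim, tgt_dim, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.src_proj = nn.Linear(src_dim, d_model)   # project source features (e.g. text)
        self.tgt_proj = nn.Linear(tgt_dim, d_model)   # project target features (e.g. acoustic)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.reconstruct = nn.Linear(d_model, tgt_dim)  # predict the target modality back

    def forward(self, src_feats, tgt_feats):
        src = self.src_proj(src_feats)                 # (batch, src_len, d_model)
        tgt = self.tgt_proj(tgt_feats)                 # (batch, tgt_len, d_model)
        fused = self.transformer(src, tgt)             # (batch, tgt_len, d_model)
        recon = self.reconstruct(fused)                # translation output
        return fused, recon


# Toy usage: one utterance, 768-d text features -> 74-d acoustic features (dimensions assumed)
model = CrossModalTranslator(src_dim=768, tgt_dim=74)
text = torch.randn(1, 20, 768)
audio = torch.randn(1, 50, 74)
fused, recon = model(text, audio)
translation_loss = nn.functional.mse_loss(recon, audio)  # train the translation; feed `fused` to a sentiment classifier
```

In this reading, the translation objective forces the shared representation to encode what the two modalities have in common, and the `fused` features are then passed to a downstream sentiment classifier; the actual pairing of modalities and the classifier head follow the paper's design.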


Results from the Paper


Ranked #1 on Multimodal Sentiment Analysis on CMU-MOSI (F1-score (Weighted) metric)

Task                           Dataset    Model              Metric Name           Metric Value   Global Rank
Multimodal Sentiment Analysis  CMU-MOSI   Tri-TransModality  F1-score (Weighted)   82.71          #1
