Multi-Modality Multi-Loss Fusion Network

1 Aug 2023  ·  Zehui Wu, Ziwei Gong, Jaywon Koo, Julia Hirschberg

In this work, we investigate the optimal selection and fusion of features across multiple modalities and combine them in a neural network to improve emotion detection. We compare different fusion methods, examine the impact of multi-loss training within the multi-modality fusion network, and identify useful findings about subnet performance. Our best model achieves state-of-the-art performance on three datasets (CMU-MOSI, CMU-MOSEI, and CH-SIMS) and outperforms other methods on most metrics. We find that training on multimodal features improves single-modality testing, and that designing fusion methods based on a dataset's annotation schema enhances model performance. These results suggest a roadmap towards an optimized feature-selection and fusion approach for enhancing emotion detection in neural networks.
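To make the multi-loss idea concrete, below is a minimal PyTorch sketch of training a fusion network with one auxiliary loss per modality alongside the fused loss. The encoder/head structure, concatenation fusion, feature dimensions, and 0.5 loss weights are all illustrative assumptions, not the paper's exact MMML architecture.

```python
# Sketch: multi-loss training in a multimodal fusion network.
# All module names, dimensions, and weights are assumptions for illustration.
import torch
import torch.nn as nn

class MultiLossFusionNet(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, hidden=128):
        super().__init__()
        # One encoder subnet per modality (placeholders for the real subnets).
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Per-modality heads supply auxiliary losses; the fusion head
        # predicts from the concatenated representations.
        self.text_head = nn.Linear(hidden, 1)
        self.audio_head = nn.Linear(hidden, 1)
        self.fusion_head = nn.Linear(2 * hidden, 1)

    def forward(self, text, audio):
        t, a = self.text_enc(text), self.audio_enc(audio)
        fused = torch.cat([t, a], dim=-1)  # simple concatenation fusion
        return self.fusion_head(fused), self.text_head(t), self.audio_head(a)

def multi_loss(preds, label, criterion=nn.L1Loss()):
    # Total loss = fused prediction loss + weighted unimodal losses.
    # The 0.5 weights are arbitrary for this sketch.
    fused, text_out, audio_out = preds
    return (criterion(fused, label)
            + 0.5 * criterion(text_out, label)
            + 0.5 * criterion(audio_out, label))

model = MultiLossFusionNet()
text = torch.randn(8, 768)   # e.g., pooled text-encoder features (assumption)
audio = torch.randn(8, 74)   # e.g., low-level acoustic features (assumption)
label = torch.randn(8, 1)    # continuous sentiment score, as in CMU-MOSI
loss = multi_loss(model(text, audio), label)
loss.backward()
```

Because every subnet receives its own gradient signal, the unimodal encoders keep improving even when the fused prediction dominates, which is consistent with the abstract's observation that multimodal training improves single-modality testing.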

Task                           Dataset    Model  Metric    Value   Global Rank
Multimodal Sentiment Analysis  CH-SIMS    MMML   F1        82.9    # 1
Multimodal Sentiment Analysis  CH-SIMS    MMML   MAE       0.332   # 1
Multimodal Sentiment Analysis  CH-SIMS    MMML   CORR      73.26   # 2
Multimodal Sentiment Analysis  CMU-MOSEI  MMML   Accuracy  86.73   # 4
Multimodal Sentiment Analysis  CMU-MOSEI  MMML   MAE       0.517   # 1
Multimodal Sentiment Analysis  CMU-MOSEI  MMML   F1        86.49   # 3
