Multimodal Sentiment Analysis
73 papers with code • 5 benchmarks • 7 datasets
Multimodal sentiment analysis is the task of performing sentiment analysis with multiple data sources, e.g. a camera feed of someone's face together with their recorded speech.
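The simplest baseline for this task is feature-level fusion: extract a feature vector per modality and concatenate them before classification. A minimal sketch, assuming hypothetical pre-extracted features (the names and dimensions below are illustrative, not from any specific paper):

```python
import numpy as np

# Hypothetical pre-extracted features for one utterance (illustrative values).
visual_feat = np.array([0.2, 0.8, 0.1])       # e.g. facial-expression embedding
audio_feat = np.array([0.5, 0.3])             # e.g. prosody embedding
text_feat = np.array([0.9, 0.1, 0.4, 0.2])    # e.g. transcript embedding

# Early (feature-level) fusion: concatenate modality features into one vector,
# which a downstream classifier would map to a sentiment label.
fused = np.concatenate([visual_feat, audio_feat, text_feat])
print(fused.shape)  # (9,)
```

Most of the papers listed below can be read as progressively more sophisticated replacements for this concatenation step.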
(Image credit: ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection)
Most implemented papers
Multimodal Sentiment Analysis To Explore the Structure of Emotions
We propose a novel approach to multimodal sentiment analysis using deep neural networks combining visual analysis and natural language processing.
Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.
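The cyclic-translation idea can be illustrated with a toy linear stand-in for the paper's sequence-to-sequence models (all names and data here are hypothetical): fit a forward map from the source to the target modality and a backward map from the prediction to the source; the intermediate representation then serves as a joint representation that needs only the source modality at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired training data: text features (source) and audio features (target).
X_text = rng.normal(size=(100, 4))
X_audio = X_text @ rng.normal(size=(4, 3)) + 0.01 * rng.normal(size=(100, 3))

# Least-squares "translation" maps, a crude stand-in for learned seq2seq models.
W_fwd, *_ = np.linalg.lstsq(X_text, X_audio, rcond=None)       # text -> audio
X_audio_hat = X_text @ W_fwd
W_back, *_ = np.linalg.lstsq(X_audio_hat, X_text, rcond=None)  # back-translation

# The intermediate prediction acts as the joint representation; at test time
# only the source modality (text) is required to compute it.
joint = X_text @ W_fwd
recon = joint @ W_back
print(joint.shape, recon.shape)
```

The cyclic (back-translation) loss encourages the joint representation to retain source information, which is the "robustness" the paper targets when a modality is missing at test time.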
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis
In this paper, we aim to learn effective modality representations to aid the process of fusion.
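The modality-invariant/-specific split can be sketched as projecting each modality into a shared subspace and a private subspace before fusion. This is only a structural illustration of the idea; the random projections below are placeholders for the learned encoders and regularizers used in the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, d_out, rng):
    """Placeholder for a learned encoder: a random linear projection."""
    W = rng.normal(size=(x.shape[-1], d_out))
    return x @ W

# Hypothetical per-utterance modality features (illustrative dimensions).
text = rng.normal(size=(8,))
audio = rng.normal(size=(5,))

# Shared (modality-invariant) and private (modality-specific) views.
shared_text = project(text, 4, rng)
shared_audio = project(audio, 4, rng)
private_text = project(text, 4, rng)
private_audio = project(audio, 4, rng)

# Fusion concatenates the invariant and specific views of every modality.
fused = np.concatenate([shared_text, shared_audio, private_text, private_audio])
print(fused.shape)  # (16,)
```

In the full method, losses push the shared views of different modalities toward each other while keeping the private views distinct; the sketch only shows the resulting representation layout.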
Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
On MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods.
Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
Multimodal sentiment analysis aims to extract and integrate semantic information collected from multiple modalities to recognize the expressed emotions and sentiment in multimodal data.
Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis
In this work, we propose a framework named MultiModal InfoMax (MMIM), which hierarchically maximizes the Mutual Information (MI) in unimodal input pairs (inter-modality) and between multimodal fusion result and unimodal input in order to maintain task-related information through multimodal fusion.
UniSA: Unified Generative Framework for Sentiment Analysis
Sentiment analysis is a crucial task that aims to understand people's emotional states and predict emotional categories based on multimodal information.
Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis
In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis.
Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling
Multimodal sentiment analysis is an active and rapidly growing field of research.
Multimodal Language Analysis with Recurrent Multistage Fusion
In this paper, we propose the Recurrent Multistage Fusion Network (RMFN) which decomposes the fusion problem into multiple stages, each of them focused on a subset of multimodal signals for specialized, effective fusion.
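The multistage decomposition can be sketched as a loop in which each stage summarizes the modalities with its own (here random, purely illustrative) attention weights and appends that summary to a running fusion state; in the actual RMFN these weightings are learned and recurrent:

```python
import numpy as np

def multistage_fusion(modalities, n_stages=3, seed=0):
    """Toy multistage fusion: each stage attends to the modality features with
    its own weighting (a random stand-in for learned attention), summarizes
    each modality to a scalar, and appends the summaries to the fusion state."""
    rng = np.random.default_rng(seed)
    state = np.zeros(0)
    for _ in range(n_stages):
        stage_summary = []
        for m in modalities:
            w = rng.random(m.shape)          # stand-in for learned stage attention
            w /= w.sum()
            stage_summary.append(float(w @ m))  # weighted summary of this modality
        state = np.concatenate([state, np.array(stage_summary)])
    return state

# Three hypothetical modalities; 3 stages x 3 modalities -> 9 fused features.
fused = multistage_fusion([np.ones(3), np.ones(2), np.ones(4)])
print(fused.shape)  # (9,)
```

Splitting fusion across stages lets each stage specialize in a subset of cross-modal interactions instead of forcing one monolithic fusion step.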