Search Results for author: Yuntao Shou

Found 10 papers, 0 papers with code

Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum

no code implementations • 27 Apr 2024 • Tao Meng, FuChen Zhang, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li

Since consistent and complementary information correspond to low-frequency and high-frequency information, respectively, this paper revisits the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum.
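As a rough illustration of this spectral view, the sketch below (toy data, not the paper's code) projects node features onto the low- and high-frequency eigenvectors of a normalized graph Laplacian; the cutoff `k` is an assumption:

```python
import numpy as np

# Toy utterance graph and node features; not the paper's data or code.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)          # spectrum, ascending frequency
k = 2                                         # assumed low/high frequency cutoff
U_low, U_high = eigvecs[:, :k], eigvecs[:, k:]

# Low-frequency band ~ consistency; high-frequency band ~ complementarity.
X_low = U_low @ (U_low.T @ X)
X_high = U_high @ (U_high.T @ X)
assert np.allclose(X_low + X_high, X)         # the two bands reconstruct X
```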

Tasks: Contrastive Learning, Emotion Recognition in Conversation, +1

Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion

no code implementations • 27 Apr 2024 • Yuntao Shou, Tao Meng, FuChen Zhang, Nan Yin, Keqin Li

Specifically, in the feature disentanglement stage, we propose Broad Mamba, which does not rely on a self-attention mechanism for sequence modeling; instead, it uses state space models to compress emotional representations and a broad learning system to explore the potential data distribution in broad space.
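The sketch below shows the core idea of a state space model scan under strong simplifying assumptions (linear time-invariant dynamics; the selective, input-dependent parameters of Mamba-style models are omitted): the sequence is compressed through a fixed-size hidden state rather than a T x T attention map.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Compress a sequence x (T, d_in) into outputs y (T, d_out) through a
    hidden state of fixed size, so per-step memory is O(1) instead of the
    O(T^2) score matrix of self-attention."""
    T, _ = x.shape
    h = np.zeros(A.shape[0])
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]   # state update
        y[t] = C @ h           # read-out
    return y

rng = np.random.default_rng(0)
d_state, d_in, d_out, T = 16, 8, 4, 50
A = 0.9 * np.eye(d_state)                    # stable toy dynamics (assumed)
B = 0.1 * rng.normal(size=(d_state, d_in))
C = 0.1 * rng.normal(size=(d_out, d_state))
y = ssm_scan(rng.normal(size=(T, d_in)), A, B, C)
print(y.shape)   # (50, 4)
```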

Tasks: Disentanglement, Emotion Classification, +2

A Two-Stage Multimodal Emotion Recognition Model Based on Graph Contrastive Learning

no code implementations • 3 Jan 2024 • Wei Ai, FuChen Zhang, Tao Meng, Yuntao Shou, HongEn Shao, Keqin Li

To address the above issues, we propose a two-stage emotion recognition model based on graph contrastive learning (TS-GCL).
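A minimal sketch of the InfoNCE objective that typically underlies graph contrastive learning (illustrative only, not the TS-GCL implementation): embeddings of two augmented views of the same node are pulled together, all other pairs pushed apart.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """z1, z2: (N, d) node embeddings from two augmented graph views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau           # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)  # toy embeddings
print(info_nce(z1, z2).item())
```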

Tasks: Classification, Contrastive Learning, +2

Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition

no code implementations • 28 Dec 2023 • Yuntao Shou, Tao Meng, Wei Ai, Keqin Li

However, existing feature fusion methods usually map the features of different modalities into the same feature space for information fusion, which cannot eliminate the heterogeneity between modalities.
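One common way to attack such heterogeneity, sketched below purely for illustration (this is a generic gradient-reversal setup, not necessarily the paper's architecture), is adversarial alignment: a discriminator learns to tell modalities apart while reversed gradients push the encoders to erase that modality signature before fusion.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip the sign on the way back to the encoders

discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

text_feat = torch.randn(8, 64, requires_grad=True)   # toy modality features
audio_feat = torch.randn(8, 64, requires_grad=True)

feats = torch.cat([text_feat, audio_feat])
labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()  # modality labels
loss = nn.functional.cross_entropy(
    discriminator(GradReverse.apply(feats)), labels)
loss.backward()  # encoder gradients now increase modality confusion
```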

Tasks: Contrastive Learning, Graph Representation Learning, +1

DER-GCN: Dialogue and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialogue Emotion Recognition

no code implementations • 17 Dec 2023 • Wei Ai, Yuntao Shou, Tao Meng, Keqin Li

Specifically, we construct a weighted multi-relationship graph to simultaneously capture the dependencies between speakers and event relations in a dialogue.
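A toy sketch of what such a weighted multi-relationship adjacency could look like, with hypothetical relation weights and made-up speaker/event data:

```python
import numpy as np

speakers = ["A", "B", "A", "C", "B"]        # speaker of each utterance (toy)
events = [{0, 1}, {1}, {2}, {1, 2}, {2}]    # toy event ids per utterance
n = len(speakers)

# One relation for "same speaker", one for "event co-occurrence".
A_speaker = np.array([[float(speakers[i] == speakers[j] and i != j)
                       for j in range(n)] for i in range(n)])
A_event = np.array([[float(bool(events[i] & events[j]) and i != j)
                     for j in range(n)] for i in range(n)])

w_speaker, w_event = 0.6, 0.4               # assumed relation weights
A = w_speaker * A_speaker + w_event * A_event  # weighted multi-relation graph
print(A)
```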

Tasks: Contrastive Learning, Multimodal Emotion Recognition, +1

Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations

no code implementations • 11 Dec 2023 • Tao Meng, Yuntao Shou, Wei Ai, Nan Yin, Keqin Li

The main task of Multimodal Emotion Recognition in Conversations (MERC) is to identify emotions in modalities, e.g., text, audio, image, and video, which is a significant research direction for realizing machine intelligence.

Tasks: Data Augmentation, Generative Adversarial Network, +2

A Comprehensive Survey on Multi-modal Conversational Emotion Recognition with Deep Learning

no code implementations • 10 Dec 2023 • Yuntao Shou, Tao Meng, Wei Ai, Nan Yin, Keqin Li

Unlike the traditional single-utterance multi-modal emotion recognition or single-modal conversation emotion recognition, MCER is a more challenging problem that needs to deal with more complex emotional interaction relationships.

Tasks: Emotion Recognition

Graph Information Bottleneck for Remote Sensing Segmentation

no code implementations • 5 Dec 2023 • Yuntao Shou, Wei Ai, Tao Meng

Furthermore, this paper innovatively introduces information bottleneck theory into graph contrastive learning to maximize task-related information while minimizing task-independent redundant information.
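A minimal sketch of the information-bottleneck trade-off in variational form (hypothetical names, not the paper's loss): a supervised task term stands in for maximizing task-related information, while a KL penalty compresses away task-independent detail.

```python
import torch
import torch.nn.functional as F

def ib_loss(mu, logvar, logits, labels, beta=1e-3):
    """mu, logvar: (N, d) Gaussian posterior params of representations Z;
    logits: (N, C) task predictions from sampled Z; beta weighs compression."""
    task = F.cross_entropy(logits, labels)                         # I(Z; Y) proxy
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # I(Z; X) bound
    return task + beta * kl

# Toy tensors standing in for encoder outputs and predictions.
mu, logvar = torch.randn(16, 32), torch.randn(16, 32)
logits, labels = torch.randn(16, 5), torch.randint(0, 5, (16,))
print(ib_loss(mu, logvar, logits, labels).item())
```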

Tasks: Change Detection, Contrastive Learning, +3

CZL-CIAE: CLIP-driven Zero-shot Learning for Correcting Inverse Age Estimation

no code implementations • 4 Dec 2023 • Yuntao Shou, Wei Ai, Tao Meng, Keqin Li

Zero-shot age estimation aims to learn age-related feature information from input images and to make inferences about a given person's image or video frame without task-specific sample data.
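A minimal sketch of CLIP-driven zero-shot age scoring, assuming the openai/CLIP package; the prompt template, image path, and expectation-over-ages read-out are placeholders, not the paper's actual prompts or correction mechanism.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

ages = list(range(1, 101))
prompts = clip.tokenize([f"a photo of a person who is {a} years old"  # assumed template
                         for a in ages]).to(device)
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)    # placeholder path

with torch.no_grad():
    img = model.encode_image(image)
    txt = model.encode_text(prompts)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    probs = (100.0 * img @ txt.T).softmax(dim=-1).squeeze(0)

expected_age = sum(a * p.item() for a, p in zip(ages, probs))
print(f"predicted age: {expected_age:.1f}")
```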

Tasks: Age Estimation, Zero-Shot Learning

A Low-rank Matching Attention based Cross-modal Feature Fusion Method for Conversational Emotion Recognition

no code implementations • 16 Jun 2023 • Yuntao Shou, Xiangyong Cao, Deyu Meng, Bo Dong, Qinghua Zheng

By setting a matching weight and calculating attention scores between modal features row by row, LMAM contains fewer parameters than the self-attention method.
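The parameter saving can be seen in a small sketch (illustrative code, not the published LMAM): factorizing the d x d matching weight into rank-r pieces cuts its parameter count from d*d to 2*d*r, while scores are still computed row by row across modalities.

```python
import torch
import torch.nn.functional as F

d, r, T = 128, 8, 20
U = torch.randn(d, r) / d**0.5   # low-rank factors of the matching weight
V = torch.randn(d, r) / d**0.5
text = torch.randn(T, d)         # toy text-modality features, one row per step
audio = torch.randn(T, d)        # toy audio-modality features

# Score each text row against every audio row through the low-rank weight;
# the full d x d matrix U @ V.T is never materialized.
scores = (text @ U) @ (V.t() @ audio.t())   # (T, T)
attn = F.softmax(scores, dim=-1)
fused = attn @ audio                        # audio aggregated per text row
print(fused.shape, f"params: {2 * d * r} vs full {d * d}")
```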

Tasks: Emotion Recognition
