Search Results for author: Geng Tu

Found 2 papers, 1 paper with code

SDIF-DA: A Shallow-to-Deep Interaction Framework with Data Augmentation for Multi-modal Intent Detection

1 code implementation · 31 Dec 2023 · Shijue Huang, Libo Qin, Bingbing Wang, Geng Tu, Ruifeng Xu

The two core challenges for multi-modal intent detection are (1) how to effectively align and fuse features from different modalities and (2) how to cope with the limited amount of labeled multi-modal intent training data.

Data Augmentation · Intent Detection · +2
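To make the fusion challenge in the abstract excerpt above concrete, here is a minimal sketch of attention-based alignment and fusion of text, audio, and video features. It is a generic illustration, not the SDIF-DA architecture; the class name, feature dimensions, and classifier head are all assumptions.

```python
# Minimal sketch (not SDIF-DA): generic attention-based alignment and fusion
# for multi-modal intent detection. Dimensions and names are illustrative.
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=256,
                 hidden=256, num_intents=20):
        super().__init__()
        # Project each modality into a shared space so features are comparable.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)
        # Text tokens attend over audio/video features to align them to the text.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_intents)

    def forward(self, text, audio, video):
        t = self.text_proj(text)                              # (B, Lt, H)
        av = torch.cat([self.audio_proj(audio),
                        self.video_proj(video)], dim=1)       # (B, La+Lv, H)
        aligned, _ = self.cross_attn(t, av, av)               # non-text aligned to text
        fused = torch.cat([t, aligned], dim=-1).mean(dim=1)   # pooled fused utterance
        return self.classifier(fused)                         # intent logits

# Toy usage with random tensors standing in for real encoder outputs.
model = SimpleFusion()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 40, 128), torch.randn(2, 30, 256))
print(logits.shape)  # torch.Size([2, 20])
```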

CSAT-FTCN: A Fuzzy-Oriented Model with Contextual Self-attention Network for Multimodal Emotion Recognition

no code implementations · Cognitive Computation 2023 · Dazhi Jiang, Hao Liu, Runguo Wei, Geng Tu

Moreover, CSAT-FTCN captures the dependency of target utterances on both their own internal key information and external contextual information, enabling a deeper understanding of emotions.

Multimodal Emotion Recognition · Question Answering
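The excerpt above describes a target utterance attending to its own content and to surrounding context. The sketch below shows one generic way such contextual self-attention over utterance embeddings can be wired up; it is not the CSAT-FTCN model, and the class name, dimensions, and emotion-label count are assumptions.

```python
# Minimal sketch (not CSAT-FTCN): self-attention over a dialogue's utterance
# embeddings so a target utterance weights internal and contextual information.
import torch
import torch.nn as nn

class ContextualSelfAttention(nn.Module):
    def __init__(self, dim=256, num_heads=4, num_emotions=7):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_emotions)

    def forward(self, utterances, target_index):
        # utterances: (B, N, D) — one embedding per utterance in the dialogue.
        # The target utterance is the query; keys/values are all utterances, so
        # the attention weights reflect dependence on internal and external info.
        query = utterances[:, target_index:target_index + 1, :]      # (B, 1, D)
        context, weights = self.attn(query, utterances, utterances)  # (B, 1, D), (B, 1, N)
        return self.classifier(context.squeeze(1)), weights

# Toy usage: a dialogue of 5 utterances, classifying the emotion of utterance 2.
model = ContextualSelfAttention()
logits, attn_weights = model(torch.randn(3, 5, 256), target_index=2)
print(logits.shape, attn_weights.shape)  # torch.Size([3, 7]) torch.Size([3, 1, 5])
```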
