Search Results for author: Chengxin Chen

Found 5 papers, 1 paper with code

TRNet: Two-level Refinement Network leveraging Speech Enhancement for Noise Robust Speech Emotion Recognition

no code implementations • 19 Apr 2024 • Chengxin Chen, Pengyuan Zhang

One persistent challenge in Speech Emotion Recognition (SER) is ubiquitous environmental noise, which frequently degrades SER performance in practical use.

Speech Emotion Recognition • Speech Enhancement

Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition

1 code implementation • 26 Dec 2023 • Chengxin Chen, Pengyuan Zhang

As a vital aspect of affective computing, Multimodal Emotion Recognition has been an active research area in the multimedia community.

Multimodal Emotion Recognition

DSNet: Disentangled Siamese Network with Neutral Calibration for Speech Emotion Recognition

no code implementations • 25 Dec 2023 • Chengxin Chen, Pengyuan Zhang

One persistent challenge in deep-learning-based speech emotion recognition (SER) is the unconscious encoding of emotion-irrelevant factors (e.g., speaker or phonetic variability), which limits the generalization of SER in practical use.

Disentanglement • Speech Emotion Recognition

Audio-Visual Scene Classification Using A Transfer Learning Based Joint Optimization Strategy

no code implementations • 25 Apr 2022 • Chengxin Chen, Meng Wang, Pengyuan Zhang

Recently, audio-visual scene classification (AVSC) has attracted increasing attention from multidisciplinary communities.

Scene Classification • Transfer Learning

CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained ASR Embeddings for Speech Emotion Recognition

no code implementations • 31 Mar 2022 • Chengxin Chen, Pengyuan Zhang

To further exploit the embeddings from different layers of the ASR encoder, we propose a novel CTA-RNN architecture to capture the emotionally salient parts of embeddings in both the channel and temporal directions.

Cross-corpus Speech Emotion Recognition
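The abstract above only hints at the mechanism, so here is a minimal sketch of what channel- and temporal-wise attention over pre-trained ASR embeddings could look like in PyTorch. The layer sizes, the GRU head, and the four-class output are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CTASketch(nn.Module):
    """Hypothetical sketch of channel- and temporal-wise attention over ASR
    encoder embeddings of shape [batch, time, dim]; hyperparameters and the
    classifier head are assumptions for illustration, not the paper's code."""

    def __init__(self, dim: int = 768, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        # Channel attention: average over time, then gate each feature channel.
        self.channel_gate = nn.Sequential(
            nn.Linear(dim, dim // 8), nn.ReLU(),
            nn.Linear(dim // 8, dim), nn.Sigmoid(),
        )
        # Temporal attention: one sigmoid score per frame.
        self.temporal_gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        # Recurrent head standing in for the "RNN" part of CTA-RNN.
        self.rnn = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [B, T, D]
        x = x * self.channel_gate(x.mean(dim=1)).unsqueeze(1)  # channel-wise weighting
        x = x * self.temporal_gate(x)                          # temporal-wise weighting
        out, _ = self.rnn(x)
        return self.classifier(out.mean(dim=1))                # utterance-level logits


# Example: a batch of 2 utterances, 100 frames of 768-dim ASR embeddings.
logits = CTASketch()(torch.randn(2, 100, 768))
print(logits.shape)  # torch.Size([2, 4])
```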
