Search Results for author: Zhaoxi Mu

Found 5 papers, 0 papers with code

Separate in the Speech Chain: Cross-Modal Conditional Audio-Visual Target Speech Extraction

no code implementations · 19 Apr 2024 · Zhaoxi Mu, Xinyu Yang

In audio-visual target speech extraction tasks, the audio modality tends to dominate, potentially overshadowing the importance of visual guidance.

Speech Extraction

Self-Supervised Disentangled Representation Learning for Robust Target Speech Extraction

no code implementations · 16 Dec 2023 · Zhaoxi Mu, Xinyu Yang, Sining Sun, Qing Yang

However, in target speech extraction, elements of global and local semantic information in the reference speech that are irrelevant to speaker identity can cause speaker confusion within the extraction network.

Disentanglement · Speech Extraction

Multi-Dimensional and Multi-Scale Modeling for Speech Separation Optimized by Discriminative Learning

no code implementations · 7 Mar 2023 · Zhaoxi Mu, Xinyu Yang, Wenjing Zhu

Specifically, we design a new network, SE-Conformer, that can model audio sequences across multiple dimensions and scales, and apply it to the dual-path speech separation framework (a generic sketch of the dual-path scheme follows below).

Speech Separation
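
For orientation: the dual-path framework referenced above segments a long encoded sequence into fixed-length chunks and alternates local (intra-chunk) and global (inter-chunk) sequence modeling. Below is a minimal PyTorch sketch of that generic scheme, not the authors' SE-Conformer; plain Transformer encoder layers stand in for the paper's blocks, and all module names, dimensions, and chunk sizes are illustrative assumptions.

```python
# Hedged sketch of the generic dual-path framework (DPRNN-style) that
# SE-Conformer-like blocks plug into. Transformer layers are stand-ins;
# the paper's actual block design is not reproduced here.
import torch
import torch.nn as nn


class DualPathBlock(nn.Module):
    """Chunk a long feature sequence, then alternate intra-chunk (local)
    and inter-chunk (global) sequence modeling."""

    def __init__(self, dim: int = 64, chunk_len: int = 100):
        super().__init__()
        self.chunk_len = chunk_len
        self.intra = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.inter = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        k = self.chunk_len
        pad = (k - t % k) % k
        x = nn.functional.pad(x, (0, 0, 0, pad))  # pad time axis to a multiple of k
        n = x.shape[1] // k
        chunks = x.reshape(b, n, k, d)

        # Intra-chunk path: model each chunk independently (fine detail).
        local = self.intra(chunks.reshape(b * n, k, d)).reshape(b, n, k, d)

        # Inter-chunk path: model across chunks at each position (long range).
        glob = local.permute(0, 2, 1, 3).reshape(b * k, n, d)
        glob = self.inter(glob).reshape(b, k, n, d).permute(0, 2, 1, 3)

        return glob.reshape(b, n * k, d)[:, :t]   # drop the padding

if __name__ == "__main__":
    x = torch.randn(2, 2000, 64)      # e.g. encoded mixture frames
    print(DualPathBlock()(x).shape)   # torch.Size([2, 2000, 64])
```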

A Multi-Stage Triple-Path Method for Speech Separation in Noisy and Reverberant Environments

no code implementations · 7 Mar 2023 · Zhaoxi Mu, Xinyu Yang, Xiangyuan Yang, Wenjing Zhu

In noisy and reverberant environments, the performance of deep learning-based speech separation methods drops dramatically, because these methods are not designed or optimized for such conditions.

Denoising · Speech Denoising · +1

Review of end-to-end speech synthesis technology based on deep learning

no code implementations · 20 Apr 2021 · Zhaoxi Mu, Xinyu Yang, Yizhuo Dong

As an indispensable part of modern human-computer interaction systems, speech synthesis technology helps users obtain the output of intelligent machines more easily and intuitively, and has therefore attracted increasing attention.

Speech Synthesis
