A unified sequence-to-sequence front-end model for Mandarin text-to-speech synthesis

11 Nov 2019 · Junjie Pan, Xiang Yin, Zhiling Zhang, Shichao Liu, Yang Zhang, Zejun Ma, Yuxuan Wang

In a Mandarin text-to-speech (TTS) system, the front-end text processing module significantly influences the intelligibility and naturalness of the synthesized speech. Building a typical pipeline-based front-end, which consists of multiple individual components, requires extensive effort...
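The abstract contrasts a pipeline front-end built from multiple individual components with a single unified model. As a toy sketch (not the paper's architecture), one way to cast several Mandarin front-end tasks as a single sequence-to-sequence target is to serialize their per-word outputs into one flat sequence; the tag format and the helper name `merge_targets` are invented for this illustration:

```python
# Toy illustration only: interleave word segmentation, pronunciation
# (pinyin with tone digits), and prosodic break levels into one flat
# target sequence that a single seq2seq model could be trained to emit.

def merge_targets(words, pinyin, prosody):
    """Serialize per-word front-end annotations into one sequence.

    words   -- segmented words
    pinyin  -- pronunciation string per word
    prosody -- prosodic break level per word (0 means no break tag)
    """
    seq = []
    for w, p, b in zip(words, pinyin, prosody):
        seq.append(w)
        seq.append("/" + p)           # pronunciation token
        if b:
            seq.append("#" + str(b))  # prosodic break level token
    return seq

target = merge_targets(["你好", "世界"], ["ni3hao3", "shi4jie4"], [1, 3])
```

A single decoder producing such a merged sequence replaces several separately trained components, which is the general motivation the abstract describes.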

No code implementations yet.


Methods used in the Paper


METHOD                 | TYPE
Softmax                | Output Functions
WaveRNN                | Generative Audio Models
Griffin-Lim Algorithm  | Phase Reconstruction
Sigmoid Activation     | Activation Functions
Highway Layer          | Miscellaneous Components
Residual Connection    | Skip Connections
Convolution            | Convolutions
Batch Normalization    | Normalization
Max Pooling            | Pooling Operations
Residual GRU           | Recurrent Neural Networks
BiGRU                  | Bidirectional Recurrent Neural Networks
Highway Network        | Feedforward Networks
CBHG                   | Speech Synthesis Blocks
ReLU                   | Activation Functions
Dropout                | Regularization
Dense Connections      | Feedforward Networks
Tanh Activation        | Activation Functions
Additive Attention     | Attention Mechanisms
GRU                    | Recurrent Neural Networks
Tacotron               | Text-to-Speech Models
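Several of the recurrent building blocks listed above (GRU, BiGRU, Residual GRU) share the same gated cell. As a minimal sketch of the standard GRU update equations, here is one step for a scalar input and state in plain Python; the weight values are invented for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input x and hidden state h.

    p holds the nine scalar parameters:
    (w_z, u_z, b_z, w_r, u_r, b_r, w_h, u_h, b_h).
    """
    w_z, u_z, b_z, w_r, u_r, b_r, w_h, u_h, b_h = p
    z = sigmoid(w_z * x + u_z * h + b_z)               # update gate
    r = sigmoid(w_r * x + u_r * h + b_r)               # reset gate
    h_cand = math.tanh(w_h * x + u_h * (r * h) + b_h)  # candidate state
    return (1.0 - z) * h + z * h_cand                  # gated interpolation

# run the cell over a short sequence with arbitrary example weights
params = (0.5, 0.5, 0.0) * 3
h = 0.0
for x in [1.0, -1.0, 0.5]:
    h = gru_step(x, h, params)
```

A BiGRU simply runs two such cells over the sequence in opposite directions and concatenates their states, and a residual GRU adds the layer input back onto its output.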