Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis

30 Aug 2018
Yu-An Chung, Yuxuan Wang, Wei-Ning Hsu, Yu Zhang, RJ Skerry-Ryan

Although end-to-end text-to-speech (TTS) models such as Tacotron have shown excellent results, they typically require a sizable set of high-quality <text, audio> pairs for training, which are expensive to collect. In this paper, we propose a semi-supervised training framework to improve the data efficiency of Tacotron...
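The two-stage recipe the abstract describes (condition the encoder on externally trained word vectors, pre-train the decoder on unpaired speech, then fine-tune on the small paired set) can be illustrated with a toy sketch. The PyTorch code below is not the authors' implementation: TinyTacotron, N_MEL, EMB_DIM, and the batch-sampling callables are hypothetical placeholders, and a mean-pooled encoder summary stands in for Tacotron's attention.

import torch
import torch.nn as nn

N_MEL = 80     # mel-spectrogram channels (assumed)
EMB_DIM = 128  # word-vector dimensionality (assumed)

class TinyTacotron(nn.Module):
    """Toy stand-in for Tacotron: text encoder + autoregressive decoder."""
    def __init__(self, vocab_size):
        super().__init__()
        # In the paper, the encoder is conditioned on word vectors trained
        # on a large external text corpus; those vectors would be loaded
        # into this embedding table before training.
        self.embed = nn.Embedding(vocab_size, EMB_DIM)
        self.encoder = nn.GRU(EMB_DIM, EMB_DIM, batch_first=True)
        self.decoder = nn.GRUCell(N_MEL + EMB_DIM, 256)
        self.proj = nn.Linear(256, N_MEL)

    def forward(self, mel_targets, text_ids=None):
        if text_ids is None:
            # Decoder-only mode for unpaired pre-training: predict the
            # next frame with no text conditioning at all.
            context = mel_targets.new_zeros(mel_targets.size(0), EMB_DIM)
        else:
            enc_out, _ = self.encoder(self.embed(text_ids))
            context = enc_out.mean(dim=1)  # crude stand-in for attention
        h = mel_targets.new_zeros(mel_targets.size(0), 256)
        prev = mel_targets.new_zeros(mel_targets.size(0), N_MEL)
        frames = []
        for t in range(mel_targets.size(1)):  # teacher forcing
            h = self.decoder(torch.cat([prev, context], dim=-1), h)
            frames.append(self.proj(h))
            prev = mel_targets[:, t]
        return torch.stack(frames, dim=1)

def train(model, sample_batch, paired, steps, lr):
    """One loop covers both stages: stage 1 feeds unpaired mels only,
    stage 2 feeds <text, audio> pairs from the small paired set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        if paired:
            text_ids, mels = sample_batch()
            pred = model(mels, text_ids)
        else:
            mels = sample_batch()
            pred = model(mels)
        loss = nn.functional.l1_loss(pred, mels)
        opt.zero_grad()
        loss.backward()
        opt.step()

Under these assumptions, stage 1 would call train(model, unpaired_sampler, paired=False, ...) on an external speech corpus, and stage 2 would fine-tune with paired=True on the small paired set.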


Methods used in the Paper


METHOD                   TYPE
Griffin-Lim Algorithm    Phase Reconstruction
Sigmoid Activation       Activation Functions
Highway Layer            Miscellaneous Components
Residual Connection      Skip Connections
Convolution              Convolutions
Batch Normalization      Normalization
Max Pooling              Pooling Operations
Residual GRU             Recurrent Neural Networks
BiGRU                    Bidirectional Recurrent Neural Networks
Highway Network          Feedforward Networks
CBHG                     Speech Synthesis Blocks
ReLU                     Activation Functions
Dropout                  Regularization
Dense Connections        Feedforward Networks
Tanh Activation          Activation Functions
Additive Attention       Attention Mechanisms
GRU                      Recurrent Neural Networks
Tacotron                 Text-to-Speech Models
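Two of the listed components are simple enough to sketch. First, the Highway layer, the building block of the CBHG module; the sketch below is a conventional PyTorch rendition (the class name and the gate-bias initialization of -1.0 are standard choices, not taken from this paper):

import torch
import torch.nn as nn

class Highway(nn.Module):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x."""
    def __init__(self, dim):
        super().__init__()
        self.H = nn.Linear(dim, dim)  # candidate transform
        self.T = nn.Linear(dim, dim)  # transform gate
        # Bias the gate negative so the layer initially carries
        # its input through largely unchanged.
        nn.init.constant_(self.T.bias, -1.0)

    def forward(self, x):
        gate = torch.sigmoid(self.T(x))
        return gate * torch.relu(self.H(x)) + (1.0 - gate) * x

Second, the Griffin-Lim algorithm, which Tacotron uses to reconstruct phase from a predicted magnitude spectrogram. A minimal sketch using librosa's built-in implementation, with illustrative STFT parameters rather than the paper's:

import numpy as np
import librosa

y, sr = librosa.load(librosa.ex('trumpet'))  # any mono waveform
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))   # magnitude only
y_rec = librosa.griffinlim(S, n_iter=60, hop_length=256)  # iterate phase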