Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet

We investigated the training of a shared model for both text-to-speech (TTS) and voice conversion (VC) tasks. We propose using an extended Tacotron architecture, i.e., a multi-source sequence-to-sequence model with a dual attention mechanism, as the shared model for both tasks...
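The abstract above is only an excerpt, but its core architectural idea, one decoder attending over two encoder memories, can be illustrated in code. The following PyTorch fragment is a minimal sketch of such a dual attention mechanism, not the authors' implementation; all class names, module choices (GRU cell, additive attention), and dimensions are assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of a dual-attention decoder step:
# one decoder state attends separately over a text-encoder memory (TTS
# source) and a speech-encoder memory (VC source); the two contexts are
# concatenated before the recurrent update. Names/dims are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention over an encoder memory."""
    def __init__(self, query_dim, memory_dim, attn_dim):
        super().__init__()
        self.query_proj = nn.Linear(query_dim, attn_dim, bias=False)
        self.memory_proj = nn.Linear(memory_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, query, memory):
        # query: (B, query_dim), memory: (B, T, memory_dim)
        scores = self.v(torch.tanh(
            self.query_proj(query).unsqueeze(1) + self.memory_proj(memory)
        )).squeeze(-1)                       # (B, T)
        weights = F.softmax(scores, dim=-1)  # alignment over memory steps
        context = torch.bmm(weights.unsqueeze(1), memory).squeeze(1)
        return context, weights

class DualAttentionDecoderCell(nn.Module):
    """One decoder step with separate attention heads over the text and
    speech encoder memories (a 'dual attention' shared decoder)."""
    def __init__(self, dec_dim, text_dim, speech_dim, attn_dim):
        super().__init__()
        self.text_attn = AdditiveAttention(dec_dim, text_dim, attn_dim)
        self.speech_attn = AdditiveAttention(dec_dim, speech_dim, attn_dim)
        self.rnn = nn.GRUCell(text_dim + speech_dim, dec_dim)

    def forward(self, state, text_memory, speech_memory):
        text_ctx, _ = self.text_attn(state, text_memory)
        speech_ctx, _ = self.speech_attn(state, speech_memory)
        # Concatenated contexts drive the decoder shared by TTS and VC.
        return self.rnn(torch.cat([text_ctx, speech_ctx], dim=-1), state)
```

In an actual joint-training setup, presumably one of the two memories would be masked or zeroed when a batch carries only a single source (e.g., TTS-only examples); the paper should be consulted for the exact scheme.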


Methods used in the Paper


METHOD                             | TYPE
Mixture of Logistic Distributions  | Output Functions
Griffin-Lim Algorithm              | Phase Reconstruction
Sigmoid Activation                 | Activation Functions
Highway Layer                      | Miscellaneous Components
Residual Connection                | Skip Connections
Convolution                        | Convolutions
Batch Normalization                | Normalization
Max Pooling                        | Pooling Operations
Residual GRU                       | Recurrent Neural Networks
BiGRU                              | Bidirectional Recurrent Neural Networks
Highway Network                    | Feedforward Networks
CBHG                               | Speech Synthesis Blocks
ReLU                               | Activation Functions
Dropout                            | Regularization
Dense Connections                  | Feedforward Networks
Tanh Activation                    | Activation Functions
Additive Attention                 | Attention Mechanisms
GRU                                | Recurrent Neural Networks
Tacotron                           | Text-to-Speech Models
Dilated Causal Convolution         | Temporal Convolutions
WaveNet                            | Generative Audio Models
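Most entries in the table are standard building blocks. As one concrete example, the dilated causal convolution listed above (the core temporal convolution in WaveNet) can be written in a few lines. The sketch below is a generic illustration with assumed channel counts and layer depth, not the paper's implementation.

```python
# Generic dilated causal convolution, as used in WaveNet-style vocoders.
# Channel count (64) and depth (10) are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedCausalConv1d(nn.Module):
    """Causal 1-D convolution: left-pads the input so each output sample
    depends only on current and past samples; dilation widens the
    receptive field exponentially as layers stack."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad the past only
        self.conv = nn.Conv1d(channels, channels,
                              kernel_size, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # no look-ahead
        return self.conv(x)

# Dilations 1, 2, 4, ... double the receptive field per layer:
# a 10-layer stack with kernel size 2 sees roughly 2**10 past samples.
stack = nn.Sequential(*[DilatedCausalConv1d(64, dilation=2 ** i)
                        for i in range(10)])
```

The causality constraint (no dependence on future samples) is what allows such a stack to be used autoregressively, generating one waveform sample at a time as WaveNet does.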