DV3 Attention Block is an attention-based module used in the Deep Voice 3 architecture. It applies a dot-product attention mechanism: a query vector (a decoder hidden state) is compared against the per-timestep key vectors from the encoder to compute attention weights, and the block outputs a context vector computed as the weighted average of the value vectors.
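The mechanism above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation of single-query dot-product attention, not the exact Deep Voice 3 code; the function name, shapes, and random toy inputs are assumptions for the example.

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """Single-query dot-product attention (illustrative sketch).

    query:  (d,)   one decoder hidden state
    keys:   (T, d) per-timestep encoder key vectors
    values: (T, d) per-timestep encoder value vectors
    Returns the context vector (d,) and the attention weights (T,).
    """
    scores = keys @ query                    # (T,) dot products query . key_t
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ values               # weighted average of value vectors
    return context, weights

# Toy usage: random encoder outputs, just to show the shapes involved.
rng = np.random.default_rng(0)
T, d = 5, 8
q = rng.standard_normal(d)
K = rng.standard_normal((T, d))
V = rng.standard_normal((T, d))
ctx, w = dot_product_attention(q, K, V)
```

The softmax over the scores makes the weights non-negative and sum to one, so the context vector is a convex combination of the encoder value vectors.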
Source: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
Task | Papers | Share |
---|---|---|
Speech Synthesis | 4 | 36.36% |
Domain Adaptation | 2 | 18.18% |
Unsupervised Domain Adaptation | 2 | 18.18% |
Melody Extraction | 1 | 9.09% |
Retrieval | 1 | 9.09% |
Text-To-Speech Synthesis | 1 | 9.09% |
Component | Type |
---|---|
Dense Connections | Feedforward Networks |
Dropout | Regularization |
Scaled Dot-Product Attention | Attention Mechanisms |
Softmax | Output Functions |