Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss function for generative adversarial networks that augments the Wasserstein loss with a gradient-norm penalty on random samples $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}$ to enforce Lipschitz continuity of the critic:
$$ L = \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\,\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\left\|\nabla_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)\right\|_{2} - 1\right)^{2}\right]$$
It was introduced as part of the overall WGAN-GP model.
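The loss above can be sketched in plain Python. The example below uses a hypothetical 1-D linear critic $D(x) = wx$ (so its gradient norm is simply $|w|$ everywhere), which makes the interpolated samples $\hat{\mathbf{x}}$ and the penalty term easy to follow; all function and variable names are illustrative, not from the paper's reference implementation.

```python
import random

# Toy 1-D linear "critic": D(x) = w * x, so grad_x D(x) = w for every x.
# This keeps the gradient norm in closed form so the WGAN-GP loss can be
# checked by hand. A real critic would be a neural network and the
# gradient would come from automatic differentiation.

def wgan_gp_loss(w, x_real, x_fake, lam=10.0, rng=random.random):
    """Critic loss: E[D(x_fake)] - E[D(x_real)] + lam * E[(||grad D|| - 1)^2]."""
    d_fake = sum(w * x for x in x_fake) / len(x_fake)   # E_{x~P_g}[D(x)]
    d_real = sum(w * x for x in x_real) / len(x_real)   # E_{x~P_r}[D(x)]

    # Sample x_hat uniformly along lines between real/fake pairs, as in
    # WGAN-GP; for this linear critic the gradient norm is |w| at any x_hat.
    penalty = 0.0
    for xr, xf in zip(x_real, x_fake):
        eps = rng()
        x_hat = eps * xr + (1 - eps) * xf   # x_hat ~ P_{x_hat}
        grad_norm = abs(w)                  # |d/dx_hat (w * x_hat)| = |w|
        penalty += (grad_norm - 1.0) ** 2
    penalty /= len(x_real)

    return d_fake - d_real + lam * penalty
```

With $w = 1$ the critic is exactly 1-Lipschitz, the penalty term vanishes, and the loss reduces to the plain Wasserstein critic loss; any other $w$ pays a penalty of $\lambda (|w| - 1)^2$.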
Source: Improved Training of Wasserstein GANs

| Task | Papers | Share |
| --- | --- | --- |
| Image Generation | 9 | 21.95% |
| Speech Synthesis | 3 | 7.32% |
| Voice Conversion | 2 | 4.88% |
| Conditional Image Generation | 2 | 4.88% |
| Synthetic Data Generation | 2 | 4.88% |
| Time Series | 1 | 2.44% |
| Speech Quality | 1 | 2.44% |
| Speech Recognition | 1 | 2.44% |
| whole slide images | 1 | 2.44% |