
# WGAN-GP Loss

Introduced by Gulrajani et al. in *Improved Training of Wasserstein GANs*.

Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty on random samples $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}$ (points interpolated between pairs drawn from the real distribution $\mathbb{P}_{r}$ and the generator distribution $\mathbb{P}_{g}$) to enforce the 1-Lipschitz constraint on the critic:

$$L = \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\left\|\nabla_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)\right\|_{2}-1\right)^{2}\right]$$

It was introduced as part of the WGAN-GP overall model.
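The gradient penalty term can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' reference implementation; the `critic` callable, the interpolation between real and fake batches, and the default $\lambda = 10$ follow the description in the paper, while the function and argument names are assumptions.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: lambda * E[(||grad_xhat D(xhat)||_2 - 1)^2],
    with xhat sampled uniformly along lines between real and fake points.
    Note: `critic`, `real`, `fake`, and `lambda_gp` are hypothetical names."""
    # One interpolation coefficient per sample, broadcast over feature dims
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    # Gradient of the critic output w.r.t. the interpolated input;
    # create_graph=True so the penalty itself can be backpropagated
    grads = torch.autograd.grad(
        outputs=d_hat,
        inputs=x_hat,
        grad_outputs=torch.ones_like(d_hat),
        create_graph=True,
        retain_graph=True,
    )[0]
    # L2 norm per sample, then the squared deviation from 1
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

During critic training this penalty is added to the Wasserstein terms, i.e. `d_loss = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)`.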
