Regularization

R1 Regularization

Introduced by Mescheder et al. in Which Training Methods for GANs do actually Converge?

$R_{1}$ Regularization is a regularization technique and gradient penalty for training generative adversarial networks. It penalizes the discriminator for deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution produces the true data distribution and the discriminator is equal to 0 on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

This leads to the following regularization term:

$$ R_{1}\left(\psi\right) = \frac{\gamma}{2}E_{p_{D}\left(x\right)}\left[||\nabla{D_{\psi}\left(x\right)}||^{2}\right] $$

Source: Which Training Methods for GANs do actually Converge?
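In practice, the penalty requires only one extra backward pass through the discriminator on real samples. The following is a minimal PyTorch sketch, not the authors' reference implementation; the `discriminator`, `real_images`, and default `gamma` value are placeholders for illustration.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 = (gamma / 2) * E_{x ~ p_D}[ ||grad_x D(x)||^2 ], on real samples only."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)  # D_psi(x); shape (batch,) or (batch, 1)
    # Per-sample input gradients; summing the scores is valid because each
    # sample's score depends only on its own input.
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True
    )
    grad_sq = grads.pow(2).reshape(grads.shape[0], -1).sum(dim=1)
    return 0.5 * gamma * grad_sq.mean()
```

The returned term is added to the usual discriminator loss before calling `backward()`; some implementations apply it lazily, i.e. only every few discriminator steps, to save compute.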

Tasks

Task                          Papers   Share
Image Generation              114      17.01%
Disentanglement               43       6.42%
Image Manipulation            32       4.78%
Face Generation               29       4.33%
Face Recognition              22       3.28%
Image-to-Image Translation    18       2.69%
Face Swapping                 17       2.54%
Super-Resolution              15       2.24%
Translation                   14       2.09%

