Gradient penalty from a maximum margin perspective

15 Oct 2019 · Alexia Jolicoeur-Martineau, Ioannis Mitliagkas

A popular heuristic for improving performance in generative adversarial networks (GANs) is to apply some form of gradient penalty to the discriminator. This gradient penalty was originally motivated by a Wasserstein distance formulation. However, the use of gradient penalty in other GAN formulations is not well motivated. We present a unifying framework of expected margin maximization and show that a wide range of gradient-penalized GANs (e.g., Wasserstein, Standard, Least-Squares, and Hinge GANs) can be derived from this framework. Our results imply that employing gradient penalties induces a large-margin classifier (thus, a large-margin discriminator in GANs). We describe how expected margin maximization helps reduce vanishing gradients at fake (generated) samples, a known problem in GANs. From this framework, we derive a new $L^\infty$ gradient norm penalty with Hinge loss which generally produces equally good (or better) generated output in GANs than $L^2$-norm penalties (as measured by the Fréchet Inception Distance).
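As a rough illustration of the kind of penalty the abstract describes, the sketch below (in PyTorch) applies a one-sided hinge penalty to the $L^\infty$ norm of the discriminator's gradient, evaluated at points interpolated between real and fake samples. The interpolation scheme, the penalty coefficient `coef`, and the helper name `linf_hinge_gradient_penalty` are illustrative assumptions in the style of common gradient-penalty implementations, not details taken from the paper.

```python
import torch

def linf_hinge_gradient_penalty(discriminator, real, fake, coef=10.0):
    """Hinge penalty on the L-infinity norm of the discriminator gradient.

    Evaluated at random interpolates between real and fake samples
    (interpolation follows common WGAN-GP practice; the paper's exact
    sampling choice may differ).
    """
    batch_size = real.size(0)
    # Random interpolation between real and fake samples
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)

    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out.sum(), inputs=x_hat, create_graph=True
    )[0]

    # Per-sample L-infinity norm of the gradient (max absolute component)
    grad_linf = grads.flatten(start_dim=1).abs().max(dim=1).values

    # One-sided hinge: only penalize gradient norms exceeding 1
    penalty = torch.relu(grad_linf - 1.0).mean()
    return coef * penalty
```

The returned penalty would typically be added to the discriminator loss during training; replacing the `max` with an $L^2$ norm recovers the familiar hinge-style $L^2$ gradient penalty for comparison.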


Results from the Paper


Task: Image Generation · Dataset: CIFAR-10 · Model: HingeGAN · Metric: FID · Value: 27.12 · Global Rank: #131
