JGAN: A Joint Formulation of GAN for Synthesizing Images and Labels

Image generation with an explicit condition or label generally works better than unconditional methods. In modern conditional GAN frameworks, both the generator and the discriminator are formulated to model the conditional distribution of images given labels. In this paper, we provide an alternative formulation of GAN that models the joint distribution of images and labels. This joint formulation has two advantages over conditional approaches. First, when properly modeled, the joint formulation is more robust to label noise; this alleviates the burden of producing noise-free labels and allows the use of weakly supervised labels in image generation. Second, any kind of weak label or image feature that correlates with the original image data can be used to enhance unconditional image generation. We show the effectiveness of our joint formulation on the CIFAR10, CIFAR100, and STL datasets with a state-of-the-art GAN architecture.
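To illustrate the distinction the abstract draws, the sketch below contrasts the two interfaces: a conditional GAN's discriminator scores an image *given* a label, whereas a joint formulation generates and scores the (image, label) pair together. This is a minimal toy sketch with hypothetical linear "networks" and dimensions of our own choosing, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; the paper's real networks are deep convolutional GANs.
IMG_DIM, NUM_CLASSES, NOISE_DIM = 64, 10, 16

W_g = rng.normal(size=(NOISE_DIM, IMG_DIM + NUM_CLASSES)) * 0.1
W_d = rng.normal(size=(IMG_DIM + NUM_CLASSES, 1)) * 0.1

def generator(z):
    """Joint generator: one noise vector -> an (image, label) pair."""
    out = z @ W_g
    img = np.tanh(out[:, :IMG_DIM])              # fake image in [-1, 1]
    logits = out[:, IMG_DIM:]                    # fake label logits
    label = np.exp(logits - logits.max(axis=1, keepdims=True))
    label /= label.sum(axis=1, keepdims=True)    # soft one-hot label (softmax)
    return img, label

def discriminator(img, label):
    """Joint discriminator: scores the (image, label) pair, i.e. it models
    p(x, y) rather than the conditional p(x | y) of a standard cGAN, where
    the label would be a fixed input instead of a generated output."""
    pair = np.concatenate([img, label], axis=1)
    return 1.0 / (1.0 + np.exp(-(pair @ W_d)))   # realness score in (0, 1)

z = rng.normal(size=(4, NOISE_DIM))
fake_img, fake_label = generator(z)
score = discriminator(fake_img, fake_label)
```

Because the label is part of the generated output rather than a fixed conditioning input, a noisy real label only perturbs one coordinate of the joint sample the discriminator sees, which is the intuition behind the robustness claim above.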
