## Projection Discriminator

Introduced by Miyato and Koyama in *cGANs with Projection Discriminator*

A Projection Discriminator is a type of discriminator for generative adversarial networks. It is motivated by a probabilistic model in which the conditional distribution of the variable $\mathbf{y}$ given $\mathbf{x}$ is either discrete or a unimodal continuous distribution.

If we look at the optimal discriminator for the loss function $\mathcal{L}_{D}$ of a vanilla GAN, we can decompose it into the sum of two log-likelihood ratios:

$$f^{*}\left(\mathbf{x}, \mathbf{y}\right) = \log\frac{q\left(\mathbf{x}\mid{\mathbf{y}}\right)q\left(\mathbf{y}\right)}{p\left(\mathbf{x}\mid{\mathbf{y}}\right)p\left(\mathbf{y}\right)} = \log\frac{q\left(\mathbf{y}\mid{\mathbf{x}}\right)}{p\left(\mathbf{y}\mid{\mathbf{x}}\right)} + \log\frac{q\left(\mathbf{x}\right)}{p\left(\mathbf{x}\right)} = r\left(\mathbf{y\mid{x}}\right) + r\left(\mathbf{x}\right)$$
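As a quick sanity check, this decomposition can be verified numerically on toy discrete distributions (the joints `q` and `p` below are hypothetical, chosen only to illustrate the identity):

```python
import numpy as np

# Hypothetical discrete joints q(x, y) and p(x, y) over 3 x-values and 2 y-values.
rng = np.random.default_rng(1)
q = rng.random((3, 2)); q /= q.sum()
p = rng.random((3, 2)); p /= p.sum()

x, y = 2, 1
qx, px = q.sum(axis=1), p.sum(axis=1)   # marginals q(x), p(x)
qy, py = q.sum(axis=0), p.sum(axis=0)   # marginals q(y), p(y)

# Left-hand side: log [ q(x|y) q(y) / (p(x|y) p(y)) ]
f_star = np.log((q[x, y] / qy[y]) * qy[y] / ((p[x, y] / py[y]) * py[y]))

# Right-hand side: r(y|x) + r(x)
r_y_given_x = np.log((q[x, y] / qx[x]) / (p[x, y] / px[x]))
r_x = np.log(qx[x] / px[x])
```

Both sides reduce to $\log\left(q(\mathbf{x},\mathbf{y})/p(\mathbf{x},\mathbf{y})\right)$, which is why the identity holds for any pair of joints.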

We can model the log-likelihood ratios $r\left(\mathbf{y}\mid{\mathbf{x}}\right)$ and $r\left(\mathbf{x}\right)$ by parametric functions $f_{1}$ and $f_{2}$, respectively. If we make the standing assumption that $p\left(\mathbf{y}\mid{\mathbf{x}}\right)$ and $q\left(\mathbf{y}\mid{\mathbf{x}}\right)$ are simple distributions, such as Gaussian or discrete log-linear distributions on the feature space, then a parametrization of the following form becomes natural:

$$f\left(\mathbf{x}, \mathbf{y}; \theta\right) = f_{1}\left(\mathbf{x}, \mathbf{y}; \theta\right) + f_{2}\left(\mathbf{x}; \theta\right) = \mathbf{y}^{T}V\phi\left(\mathbf{x}; \theta_{\phi}\right) + \psi\left(\phi(\mathbf{x}; \theta_{\phi}); \theta_{\psi}\right)$$

where $V$ is the embedding matrix of $\mathbf{y}$, $\phi\left(\cdot; \theta_{\phi}\right)$ is a vector-valued function of $\mathbf{x}$, and $\psi\left(\cdot; \theta_{\psi}\right)$ is a scalar function of the same $\phi\left(\mathbf{x}; \theta_{\phi}\right)$ that appears in $f_{1}$. The learned parameters $\theta = \left\{V, \theta_{\phi}, \theta_{\psi}\right\}$ are trained to optimize the adversarial loss. The name comes from the first term: the conditional information $\mathbf{y}$ enters the discriminator through an inner product, i.e. a projection of the embedded label onto the image features.
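A minimal NumPy sketch of this parametrization follows. The feature extractor `phi` and the linear `psi` here are placeholder stand-ins (in the paper both are deep networks trained adversarially), and `y` is a class index so that $\mathbf{y}^{T}V$ with a one-hot $\mathbf{y}$ reduces to selecting row $y$ of the embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 128

# Hypothetical parameters: V embeds each class label; psi is a scalar head.
V = rng.normal(scale=0.02, size=(num_classes, feat_dim))  # class embedding matrix
w_psi = rng.normal(scale=0.02, size=(feat_dim,))          # linear psi for simplicity
b_psi = 0.0

def phi(x):
    """Stand-in for the feature extractor phi(x; theta_phi); a conv net in practice."""
    return np.tanh(x)

def discriminator(x, y):
    """Projection discriminator: f(x, y) = y^T V phi(x) + psi(phi(x))."""
    feat = phi(x)                         # phi(x; theta_phi)
    proj = V[y] @ feat                    # y^T V phi(x), with y a class index
    unconditional = feat @ w_psi + b_psi  # psi(phi(x); theta_psi)
    return proj + unconditional

x = rng.normal(size=(feat_dim,))
score = discriminator(x, y=3)
```

The key design choice is that the label interacts with the image only through the inner product `V[y] @ feat`, while the `psi` term scores the image unconditionally.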

#### Latest Papers

| Paper | Authors | Date |
| --- | --- | --- |
| not-so-BigGAN: Generating High-Fidelity Images on a Small Compute Budget | Seungwook Han, Akash Srivastava, Cole Hurwitz, Prasanna Sattigeri, David D. Cox | 2020-09-09 |
| Neural Crossbreed: Neural Based Image Metamorphosis | Sanghun Park, Kwanggyoon Seo, Junyong Noh | 2020-09-02 |
| Multimodal Image-to-Image Translation via a Single Generative Adversarial Network | Shihua Huang, Cheng He, Ran Cheng | 2020-08-04 |
| Instance Selection for GANs | Terrance DeVries, Michal Drozdzal, Graham W. Taylor | 2020-07-30 |
| Interpolating GANs to Scaffold Autotelic Creativity | Ziv Epstein, Océane Boulais, Skylar Gordon, Matt Groh | 2020-07-21 |
| Differentiable Augmentation for Data-Efficient GAN Training | Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han | 2020-06-18 |
| Training Generative Adversarial Networks with Limited Data | Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila | 2020-06-11 |
| Learning disconnected manifolds: a no GANs land | Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jeremie Mary | 2020-06-08 |
| Big GANs Are Watching You: Towards Unsupervised Object Segmentation with Off-the-Shelf Generative Models | Andrey Voynov, Stanislav Morozov, Artem Babenko | 2020-06-08 |
| A U-Net Based Discriminator for Generative Adversarial Networks | Edgar Schönfeld, Bernt Schiele, Anna Khoreva | 2020-06-01 |
| Network Fusion for Content Creation with Conditional INNs | Robin Rombach, Patrick Esser, Björn Ommer | 2020-05-27 |
| Mimicry: Towards the Reproducibility of GAN Research | Kwot Sin Lee, Christopher Town | 2020-05-05 |
| GANSpace: Discovering Interpretable GAN Controls | Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris | 2020-04-06 |
| Evolving Normalization-Activation Layers | Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le | 2020-04-06 |
| Feature Quantization Improves GAN Training | Yang Zhao, Chunyuan Li, Ping Yu, Jianfeng Gao, Changyou Chen | 2020-04-05 |
| BigGAN-based Bayesian reconstruction of natural images from human brain activity | Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Li Tong, Bin Yan | 2020-03-13 |
| Improved Consistency Regularization for GANs | Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang | 2020-02-11 |
| Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN | | 2020-01-31 |
| Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures | Mohamed El Amine Seddik, Cosme Louart, Mohamed Tamaazousti, Romain Couillet | 2020-01-21 |
| CNN-generated images are surprisingly easy to spot... for now | Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, Alexei A. Efros | 2019-12-23 |
| Detecting GAN generated errors | Xiru Zhu, Fengdi Che, Tianzi Yang, Tzuyang Yu, David Meger, Gregory Dudek | 2019-12-02 |
| LOGAN: Latent Optimisation for Generative Adversarial Networks | Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, Timothy Lillicrap | 2019-12-02 |
| Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis | Ceyuan Yang, Yujun Shen, Bolei Zhou | 2019-11-21 |
| Improving sample diversity of a pre-trained, class-conditional GAN by changing its class embeddings | Qi Li, Long Mai, Michael A. Alcorn, Anh Nguyen | 2019-10-10 |
| Adversarial Video Generation on Complex Datasets | Aidan Clark, Jeff Donahue, Karen Simonyan | 2019-07-15 |
| | Jeff Donahue, Karen Simonyan | 2019-07-04 |
| Improved Precision and Recall Metric for Assessing Generative Models | Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila | 2019-04-15 |
| High-Fidelity Image Generation With Fewer Labels | Mario Lucic, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly | 2019-03-06 |
| Metropolis-Hastings view on variational inference and adversarial training | Kirill Neklyudov, Evgenii Egorov, Pavel Shvechikov, Dmitry Vetrov | 2018-10-16 |
| Large Scale GAN Training for High Fidelity Natural Image Synthesis | Andrew Brock, Jeff Donahue, Karen Simonyan | 2018-09-28 |
| Towards Audio to Scene Image Synthesis using Generative Adversarial Network | Chia-Hung Wan, Shun-Po Chuang, Hung-Yi Lee | 2018-08-13 |
| cGANs with Projection Discriminator | Takeru Miyato, Masanori Koyama | 2018-02-15 |
