Semantic Bottleneck Scene Generation

26 Nov 2019  ·  Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout. For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts. For the latter, we use a conditional segmentation-to-image synthesis network that captures the distribution of photo-realistic images conditioned on the semantic layout. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Fréchet Inception Distance and user-study evaluations. Moreover, we demonstrate that the generated segmentation maps can be used as additional training data to substantially improve recent segmentation-to-image synthesis networks.

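The two-stage pipeline described in the abstract can be sketched compactly. The sketch below is an illustrative assumption, not the authors' released code: LayoutGenerator stands in for the unconditional progressive segmentation network, ConditionalImageGenerator for the conditional segmentation-to-image network, and a straight-through Gumbel-softmax is one standard way to backpropagate through the discrete layout so both stages can be trained end-to-end. All module names, shapes, and hyperparameters are hypothetical.

```python
# Minimal sketch of a semantic bottleneck GAN sampling pipeline (assumptions,
# not the paper's architecture): latent z -> segmentation layout -> image.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19   # e.g. Cityscapes semantic classes (assumption)
LATENT_DIM = 128


class LayoutGenerator(nn.Module):
    """Unconditional generator: latent vector -> per-pixel class logits."""
    def __init__(self, latent_dim=LATENT_DIM, num_classes=NUM_CLASSES):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 4 * 8)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, num_classes, 3, padding=1),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 4, 8)
        return self.net(h)  # (B, num_classes, 32, 64) logits


class ConditionalImageGenerator(nn.Module):
    """Conditional generator: one-hot segmentation layout -> RGB image."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, layout_onehot):
        return self.net(layout_onehot)


def sample_scene(g_layout, g_image, batch=4, tau=1.0):
    """Two-stage inference: synthesize a layout, then a scene on top of it."""
    z = torch.randn(batch, LATENT_DIM)
    logits = g_layout(z)
    # Straight-through Gumbel-softmax yields a (near) one-hot layout in the
    # forward pass while keeping gradients for end-to-end training.
    layout = F.gumbel_softmax(logits, tau=tau, hard=True, dim=1)
    return g_image(layout), layout


if __name__ == "__main__":
    imgs, layouts = sample_scene(LayoutGenerator(), ConditionalImageGenerator())
    print(imgs.shape, layouts.argmax(dim=1).shape)
```

Sampling a scene is then a single forward pass through both stages: a latent vector becomes a semantic layout, and the layout becomes an image.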

Results

| Task                       | Dataset                    | Model  | Metric | Value | Global Rank |
|----------------------------|----------------------------|--------|--------|-------|-------------|
| Image Generation           | ADE-Indoor                 | SB-GAN | FID    | 85.27 | #3          |
| Image-to-Image Translation | ADE-Indoor Labels-to-Photo | SB-GAN | FID    | 48.15 | #1          |
| Image Generation           | Cityscapes-25K 256x512     | SB-GAN | FID    | 62.97 | #1          |
| Image Generation           | Cityscapes-5K 256x512      | SB-GAN | FID    | 65.49 | #1          |
| Image-to-Image Translation | Cityscapes Labels-to-Photo | SB-GAN | FID    | 60.39 | #11         |
