Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation

CVPR 2020  ·  Hao Tang, Dan Xu, Yan Yan, Philip H. S. Torr, Nicu Sebe

In this paper, we address the task of semantic-guided scene generation. One open challenge in scene generation is generating small objects and detailed local textures, a difficulty widely observed in global image-level generation methods. To tackle this issue, we learn scene generation in a local context and correspondingly design a local class-specific generative network, guided by semantic maps, that constructs and trains separate sub-generators concentrating on different classes and thus provides richer scene details. To learn more discriminative class-specific feature representations for local generation, we also propose a novel classification module. To combine the advantages of both global image-level and local class-specific generation, we design a joint generation network with an embedded attention fusion module and a dual-discriminator structure. Extensive experiments on two scene image generation tasks show the superior generation performance of the proposed model, which establishes new state-of-the-art results by large margins on challenging public benchmarks. The source code and trained models are available at https://github.com/Ha0Tang/LGGAN.
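To make the local/global fusion idea concrete, below is a minimal PyTorch sketch of one per-class sub-generator branch plus a global branch, blended by per-pixel attention weights. The module names, channel sizes, and the tiny one-convolution "generators" are illustrative assumptions, not the authors' architecture; see the repository linked above for the official implementation.

```python
# Minimal sketch of local class-specific + global generation with attention
# fusion, in the spirit of the abstract. All names and layer choices here
# are illustrative assumptions, not LGGAN's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + ReLU, a stand-in for a heavier generator backbone."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))


class LocalGlobalFusion(nn.Module):
    """Fuses one global image-level branch with per-class local branches.

    Each of the `num_classes` sub-generators sees the shared features masked
    by its own semantic-map channel, so it specializes on one class region;
    an attention head predicts per-branch weights used to blend the global
    and local RGB outputs pixel-wise.
    """

    def __init__(self, num_classes, feat_ch=64):
        super().__init__()
        self.num_classes = num_classes
        # input = RGB image (or noise) concatenated with one-hot semantic map
        self.encoder = conv_block(3 + num_classes, feat_ch)
        self.global_gen = nn.Conv2d(feat_ch, 3, 3, padding=1)   # global branch
        self.local_gens = nn.ModuleList(                        # one sub-generator per class
            [nn.Conv2d(feat_ch, 3, 3, padding=1) for _ in range(num_classes)]
        )
        # attention over (1 global + num_classes local) candidate images
        self.attention = nn.Conv2d(feat_ch, num_classes + 1, 3, padding=1)

    def forward(self, x, semantic_map):
        # semantic_map: (B, num_classes, H, W) one-hot label map
        feat = self.encoder(torch.cat([x, semantic_map], dim=1))
        outs = [self.global_gen(feat)]
        for c, gen in enumerate(self.local_gens):
            mask = semantic_map[:, c:c + 1]          # (B, 1, H, W) class mask
            outs.append(gen(feat * mask))            # class-specific generation
        attn = F.softmax(self.attention(feat), dim=1)  # per-pixel branch weights
        fused = sum(attn[:, i:i + 1] * o for i, o in enumerate(outs))
        return torch.tanh(fused)


if __name__ == "__main__":
    model = LocalGlobalFusion(num_classes=8)
    img = torch.randn(2, 3, 64, 64)
    sem = F.one_hot(torch.randint(0, 8, (2, 64, 64)), 8).permute(0, 3, 1, 2).float()
    print(model(img, sem).shape)  # torch.Size([2, 3, 64, 64])
```

The per-pixel softmax lets the network decide, at each spatial location, whether to trust the global image-level output or one of the class-specific outputs, which is how small objects and fine local textures can be recovered without sacrificing global layout consistency.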

Results

Cross-View Image-to-Image Translation on CVUSA (LGGAN):
    SSIM   0.5238    (global rank #3)
    KL     2.55      (global rank #2)
    PSNR   22.5766   (global rank #2)
    SD     19.744    (global rank #2)

Cross-View Image-to-Image Translation on Dayton (256×256), aerial-to-ground (LGGAN):
    SSIM   0.5457    (global rank #2)
    KL     2.18      (global rank #2)
    PSNR   22.9949   (global rank #1)
    SD     19.6145   (global rank #1)
