Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis

25 Mar 2020  ·  Wei Sun, Tianfu Wu ·

With the remarkable recent progress on learning deep generative models, it becomes increasingly interesting to develop models for controllable image synthesis from reconfigurable inputs. This paper focuses on a recently emerged task, layout-to-image, which learns generative models capable of synthesizing photo-realistic images from a spatial layout (i.e., object bounding boxes configured in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors). The paper first proposes an intuitive paradigm for the task, layout-to-mask-to-image, which learns to unfold object masks from the bounding boxes in an input layout to bridge the gap between the input layout and the synthesized image. It then presents a method built on Generative Adversarial Networks for the proposed layout-to-mask-to-image paradigm with style control at both the image and object mask levels. Object masks are learned from the input layout and iteratively refined along the stages of the generator network. Style control at the image level is the same as in vanilla GANs, while style control at the object mask level is realized by a novel feature normalization scheme, Instance-Sensitive and Layout-Aware Normalization (ISLA-Norm). In experiments, the proposed method is evaluated on the COCO-Stuff and Visual Genome datasets, obtaining state-of-the-art performance.

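For intuition, here is a minimal PyTorch sketch of how an instance-sensitive, layout-aware normalization layer could be wired up: per-object style latents are projected to per-object affine parameters (gamma, beta), spread over the feature map with soft object masks placed inside the bounding boxes, and applied after parameter-free batch normalization. The class name ISLANorm, the linear projections, and the mask-weighted combination are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ISLANorm(nn.Module):
    """Sketch of an instance-sensitive, layout-aware normalization layer.

    Assumptions (not taken from the paper's code): per-object style codes are
    projected to per-object (gamma, beta), broadcast spatially via soft object
    masks, and applied after batch normalization without its own affine params.
    """

    def __init__(self, num_features: int, style_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # hypothetical projections from per-object style codes to affine params
        self.to_gamma = nn.Linear(style_dim, num_features)
        self.to_beta = nn.Linear(style_dim, num_features)

    def forward(self, x, obj_styles, obj_masks):
        # x:          (B, C, H, W) feature map from the generator
        # obj_styles: (B, O, style_dim) latent style code per object
        # obj_masks:  (B, O, H, W) soft masks confined to the object boxes
        x = self.bn(x)
        gamma = self.to_gamma(obj_styles)              # (B, O, C)
        beta = self.to_beta(obj_styles)                # (B, O, C)
        m = obj_masks.unsqueeze(2)                     # (B, O, 1, H, W)
        # spread per-object params over space with the masks, sum over objects
        g = (gamma.unsqueeze(-1).unsqueeze(-1) * m).sum(dim=1)  # (B, C, H, W)
        b = (beta.unsqueeze(-1).unsqueeze(-1) * m).sum(dim=1)   # (B, C, H, W)
        return (1 + g) * x + b


# toy usage: 2 images, up to 3 objects each, 64-channel features on an 8x8 grid
norm = ISLANorm(num_features=64, style_dim=128)
x = torch.randn(2, 64, 8, 8)
styles = torch.randn(2, 3, 128)
masks = torch.rand(2, 3, 8, 8)
out = norm(x, styles, masks)  # (2, 64, 8, 8)
```

Because the affine parameters come from per-object latents placed through the layout masks, resampling one object's style code changes only the region covered by its mask, which is what makes the object-level style reconfigurable.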
Task                        Dataset                 Model       Metric           Value   Global Rank
Layout-to-Image Generation  COCO-Stuff 128x128      LostGAN-V2  FID              24.76   #2
Layout-to-Image Generation  COCO-Stuff 128x128      LostGAN-V2  Inception Score  14.21   #3
Layout-to-Image Generation  Visual Genome 128x128   LostGAN-V2  FID              29.00   #4
Layout-to-Image Generation  Visual Genome 128x128   LostGAN-V2  Inception Score  10.71   #3
