MontageGAN: Generation and Assembly of Multiple Components by GANs

31 May 2022 · Chean Fei Shee, Seiichi Uchida

A multi-layer image is more valuable than a single-layer image from a graphic designer's perspective. However, most image generation methods proposed so far focus on single-layer images. In this paper, we propose MontageGAN, a Generative Adversarial Network (GAN) framework for generating multi-layer images. Our method uses a two-step approach consisting of local GANs and a global GAN: each local GAN learns to generate a specific image layer, and the global GAN learns the placement of each generated image layer. Through our experiments, we show that our method can generate multi-layer images and estimate the placement of each generated layer.
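
To make the two-step structure concrete, the sketch below illustrates the general idea only; it is not the authors' implementation. All module names (`LocalGenerator`, `GlobalPlacement`, `place`, `composite`) are hypothetical, the networks are toy MLPs rather than real GAN generators, and only the generation/composition path is shown (the adversarial training loop is omitted). The assumption is that each local generator outputs an RGBA layer from a latent code, a global placement network predicts a scale and translation per layer, and the layers are alpha-composited into a single image.

```python
# Minimal sketch of the local-GANs + global-GAN idea (assumed structure, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGenerator(nn.Module):
    """Stand-in for one per-layer generator (one local GAN per image layer)."""
    def __init__(self, latent_dim=128, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 4 * size * size),  # RGBA layer
        )

    def forward(self, z):
        x = self.net(z).view(-1, 4, self.size, self.size)
        return torch.sigmoid(x)  # RGB + alpha in [0, 1]

class GlobalPlacement(nn.Module):
    """Stand-in for the global GAN's placement prediction: scale + translation per layer."""
    def __init__(self, num_layers, latent_dim=128):
        super().__init__()
        self.num_layers = num_layers
        self.net = nn.Sequential(
            nn.Linear(num_layers * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_layers * 3),  # (scale, tx, ty) per layer
        )

    def forward(self, zs):  # zs: (batch, num_layers, latent_dim)
        p = self.net(zs.flatten(1)).view(-1, self.num_layers, 3)
        scale = torch.sigmoid(p[..., 0]) + 0.5   # keep scale positive
        txy = torch.tanh(p[..., 1:])             # translation in [-1, 1]
        return scale, txy

def place(layer, scale, txy):
    """Warp one RGBA layer with a scale + translation affine grid."""
    theta = torch.zeros(layer.size(0), 2, 3, device=layer.device)
    theta[:, 0, 0] = scale
    theta[:, 1, 1] = scale
    theta[:, :, 2] = txy
    grid = F.affine_grid(theta, layer.shape, align_corners=False)
    return F.grid_sample(layer, grid, align_corners=False)

def composite(layers):
    """Alpha-composite a list of placed RGBA layers, back to front."""
    canvas = torch.zeros_like(layers[0][:, :3])
    for layer in layers:
        rgb, alpha = layer[:, :3], layer[:, 3:4]
        canvas = alpha * rgb + (1 - alpha) * canvas
    return canvas

# Usage: two layers (e.g. a face base and an eye layer), batch of 4.
gens = nn.ModuleList([LocalGenerator() for _ in range(2)])
placer = GlobalPlacement(num_layers=2)
zs = torch.randn(4, 2, 128)
layers = [g(zs[:, i]) for i, g in enumerate(gens)]
scale, txy = placer(zs)
placed = [place(l, scale[:, i], txy[:, i]) for i, l in enumerate(layers)]
image = composite(placed)  # (4, 3, 64, 64) flattened multi-layer image
```

In a real setting, each local generator and the global placement network would be trained adversarially against discriminators on the individual layers and the composited image, respectively; the sketch only shows how generated layers could be placed and flattened into one output.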
