SemanticStyleGAN presents a method in which a generator is trained to model local semantic parts separately and to synthesize images in a compositional way. Experimental results demonstrate that the SemanticStyleGAN model provides strong disentanglement between different spatial areas. When combined with editing methods designed for StyleGANs, it enables more fine-grained control for editing synthesized or real images.

Key features:
- Style mixing between generated images
- Texture and structure are controlled locally, per semantic part

Developed by Yichun Shi, Xiao Yang, Yangyue Wan, and Xiaohui Shen at ByteDance Inc.
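To make the compositional idea concrete, below is a minimal toy sketch (not the authors' implementation): each semantic part gets its own structure and texture latent codes, per-part pseudo-depth maps are turned into masks via a softmax, and the masked features are fused into one image. The class names (LocalGenerator, CompositionalGenerator), layer choices, and dimensions are all hypothetical simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGenerator(nn.Module):
    """Toy local generator for one semantic part (hypothetical).

    Maps a structure code to a pseudo-depth map (used to derive the part
    mask) and a structure+texture code to a per-part feature map.
    """
    def __init__(self, w_dim=64, feat_dim=16, res=32):
        super().__init__()
        self.res, self.feat_dim = res, feat_dim
        self.depth_head = nn.Linear(w_dim, res * res)
        self.feat_head = nn.Linear(w_dim * 2, feat_dim * res * res)

    def forward(self, w_structure, w_texture):
        b = w_structure.shape[0]
        depth = self.depth_head(w_structure).view(b, 1, self.res, self.res)
        feat = self.feat_head(torch.cat([w_structure, w_texture], dim=1))
        return depth, feat.view(b, self.feat_dim, self.res, self.res)

class CompositionalGenerator(nn.Module):
    """Composes per-part outputs: softmax over pseudo-depths yields masks,
    masks weight the per-part features, and a 1x1 conv renders RGB."""
    def __init__(self, num_parts=4, w_dim=64, feat_dim=16, res=32):
        super().__init__()
        self.parts = nn.ModuleList(
            LocalGenerator(w_dim, feat_dim, res) for _ in range(num_parts)
        )
        self.to_rgb = nn.Conv2d(feat_dim, 3, kernel_size=1)

    def forward(self, w_structure, w_texture):
        # w_structure, w_texture: (batch, num_parts, w_dim)
        depths, feats = [], []
        for k, part in enumerate(self.parts):
            d, f = part(w_structure[:, k], w_texture[:, k])
            depths.append(d)
            feats.append(f)
        depths = torch.cat(depths, dim=1)                # (b, K, H, W)
        masks = F.softmax(depths, dim=1)                 # per-pixel part assignment
        feats = torch.stack(feats, dim=1)                # (b, K, C, H, W)
        fused = (masks.unsqueeze(2) * feats).sum(dim=1)  # composite feature map
        return torch.sigmoid(self.to_rgb(fused)), masks

# Local editing by style mixing: swap only one part's texture code between
# two samples; the other regions stay untouched because each part is
# generated by its own local generator.
gen = CompositionalGenerator()
ws_a, wt_a = torch.randn(1, 4, 64), torch.randn(1, 4, 64)
ws_b, wt_b = torch.randn(1, 4, 64), torch.randn(1, 4, 64)
img_a, _ = gen(ws_a, wt_a)

wt_mix = wt_a.clone()
wt_mix[:, 2] = wt_b[:, 2]       # mix part 2's texture from sample B into A
img_edit, masks = gen(ws_a, wt_mix)
```

Because each region is driven by its own structure/texture codes, editing one part's latent leaves the rest of the image largely unchanged, which is the disentanglement property the paper reports.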
