SemanticStyleGAN
SemanticStyleGAN presents a method in which the generator models local semantic parts separately and synthesizes images compositionally. Experimental results demonstrate that the model provides strong disentanglement between different spatial areas. When combined with editing methods designed for StyleGANs, it achieves finer-grained control when editing synthesized or real images.

Key features:
- Style mixing between generated images
- Texture and structure are controlled locally, per semantic part (see the sketch below)
- Developed by Yichun Shi, Xiao Yang, Yangyue Wan, and Xiaohui Shen at ByteDance Inc.
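To make the compositional idea concrete, here is a minimal PyTorch sketch, not the authors' code: a hypothetical `LocalGenerator` maps each part's latent to a feature map plus a pseudo-depth map, a softmax over the pseudo-depths yields soft segmentation masks, and the mask-weighted sum of features forms the composite that a StyleGAN2-like render network (omitted here) would turn into an image. All module names, shapes, and the `compose_parts` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGenerator(nn.Module):
    """Hypothetical local generator: maps a per-part latent code to a
    feature map and a pseudo-depth map for one semantic part."""
    def __init__(self, latent_dim=64, feat_dim=32, size=16):
        super().__init__()
        self.size, self.feat_dim = size, feat_dim
        self.net = nn.Linear(latent_dim, (feat_dim + 1) * size * size)

    def forward(self, w):
        out = self.net(w).view(-1, self.feat_dim + 1, self.size, self.size)
        return out[:, :-1], out[:, -1:]  # features, pseudo-depth

def compose_parts(feats, depths):
    """Softmax over per-part pseudo-depths gives soft segmentation masks;
    the composite feature map is the mask-weighted sum of part features."""
    masks = F.softmax(torch.cat(depths, dim=1), dim=1)            # (B, K, H, W)
    composite = sum(masks[:, k:k+1] * f for k, f in enumerate(feats))
    return composite, masks

K, B = 3, 2  # number of semantic parts, batch size (illustrative)
gens = nn.ModuleList([LocalGenerator() for _ in range(K)])
w_a = [torch.randn(B, 64) for _ in range(K)]  # per-part latents, image A
w_b = [torch.randn(B, 64) for _ in range(K)]  # per-part latents, image B

# Local style mixing: reuse image A's latents but swap in part 1 from B,
# so only that part's texture/structure changes in the composite.
w_mixed = list(w_a)
w_mixed[1] = w_b[1]

outs = [g(w) for g, w in zip(gens, w_mixed)]
composite, masks = compose_parts([f for f, _ in outs], [d for _, d in outs])
print(composite.shape, masks.shape)  # (2, 32, 16, 16), (2, 3, 16, 16)
```

Because each part has its own latent code, swapping a single entry of `w_mixed` edits only the region covered by that part's mask, which is the local disentanglement the summary describes.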