Image Generators with Conditionally-Independent Pixel Synthesis

Existing image generator networks rely heavily on spatial convolutions and, optionally, self-attention blocks in order to gradually synthesize images in a coarse-to-fine manner. Here, we present a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel. No spatial convolutions or similar operations that propagate information across pixels are involved during the synthesis. We analyze the modeling capabilities of such generators when trained in an adversarial fashion, and observe the new generators to achieve similar generation quality to state-of-the-art convolutional generators. We also investigate several interesting properties unique to the new architecture.
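The key idea above — each pixel's color is a function only of the pixel's coordinate and a shared latent vector, with no convolutions or cross-pixel communication — can be illustrated with a minimal sketch. The sinusoidal coordinate encoding and the tiny random-weight MLP below are illustrative assumptions, not the paper's exact architecture (CIPS uses a much deeper modulated MLP); the point is that every pixel is computed by the same per-pixel function, independently of its neighbors.

```python
import numpy as np

def fourier_features(coords, n_freq=4):
    # coords: (N, 2) in [-1, 1]; simple sinusoidal positional encoding
    freqs = 2.0 ** np.arange(n_freq)                              # (n_freq,)
    angles = coords[:, None, :] * freqs[None, :, None] * np.pi    # (N, n_freq, 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1).reshape(len(coords), -1)

def synthesize(z, H=8, W=8, hidden=32, seed=0):
    """Compute RGB for every pixel independently from (coordinate, latent z).

    No spatial ops: the same MLP maps each (encoded coordinate, z) pair to a
    color, so pixels could be evaluated in any order or in isolation.
    """
    rng = np.random.default_rng(seed)  # fixed random weights, for illustration only
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)           # (H*W, 2)
    feats = fourier_features(coords)                              # (H*W, F)
    inp = np.concatenate([feats, np.tile(z, (H * W, 1))], axis=1) # per-pixel input
    W1 = rng.standard_normal((inp.shape[1], hidden)) * 0.1
    W2 = rng.standard_normal((hidden, 3)) * 0.1
    rgb = np.tanh(np.maximum(inp @ W1, 0.0) @ W2)                 # pixel-wise MLP
    return rgb.reshape(H, W, 3)

img = synthesize(np.zeros(16))  # latent dimension 16 is an arbitrary choice here
```

Because each pixel is conditionally independent given `z`, the same function can render an arbitrary subset of coordinates (e.g. a crop, or a sparse grid) without generating the full image — one of the properties that distinguishes such generators from convolutional ones.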

CVPR 2021

Datasets

FFHQ, Landscapes, LSUN Churches, Satellite-Buildings, Satellite-Landscapes

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Generation | FFHQ 1024 x 1024 | CIPS | FID | 10.07 | #16 |
| Image Generation | FFHQ 256 x 256 | CIPS | FID | 4.38 | #14 |
| Image Generation | Landscapes 256 x 256 | CIPS | FID | 3.61 | #1 |
| Image Generation | LSUN Churches 256 x 256 | CIPS | FID | 2.92 | #5 |
| Image Generation | Satellite-Buildings 256 x 256 | CIPS | FID | 69.67 | #1 |
| Image Generation | Satellite-Landscapes 256 x 256 | CIPS | FID | 48.47 | #1 |

Methods


No methods listed for this paper.