Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet a serious limitation remains: all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. In addition, because good quantitative metrics for image completion are lacking, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images relative to real images via linear separability in a feature space. Experiments demonstrate superior performance, in terms of both quality and diversity, over state-of-the-art methods on free-form image completion, as well as easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan.
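The co-modulation idea in the abstract can be illustrated with a minimal, hypothetical sketch: the per-layer style that modulates the generator's convolution weights is produced jointly from an image-conditional encoder feature and a stochastic latent passed through a mapping network. The class and argument names below (CoModulatedConv, cond_feat, z) are illustrative assumptions in the spirit of StyleGAN2-style weight modulation, not the authors' released implementation.

# Hypothetical sketch of co-modulation (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoModulatedConv(nn.Module):
    """Modulated 3x3 convolution whose per-channel style is produced jointly
    from (a) an image-conditional encoder feature and (b) a stochastic latent
    mapped through an MLP, following StyleGAN2-style weight modulation."""
    def __init__(self, in_ch, out_ch, cond_dim, z_dim, w_dim=512):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3))
        self.mapping = nn.Sequential(nn.Linear(z_dim, w_dim), nn.LeakyReLU(0.2),
                                     nn.Linear(w_dim, w_dim))
        # joint affine: concatenated [conditional, stochastic] -> per-channel scales
        self.affine = nn.Linear(cond_dim + w_dim, in_ch)

    def forward(self, x, cond_feat, z):
        w = self.mapping(z)                                  # stochastic style
        s = self.affine(torch.cat([cond_feat, w], dim=1))    # co-modulated style
        b, in_ch, _, _ = x.shape
        # modulate weights per sample, then demodulate (weight normalization)
        weight = self.weight[None] * s[:, None, :, None, None]
        demod = torch.rsqrt(weight.pow(2).sum(dim=[2, 3, 4]) + 1e-8)
        weight = weight * demod[:, :, None, None, None]
        # grouped conv applies each sample's modulated weights to that sample
        x = x.reshape(1, -1, *x.shape[2:])
        weight = weight.reshape(-1, in_ch, 3, 3)
        out = F.conv2d(x, weight, padding=1, groups=b)
        return out.reshape(b, -1, *out.shape[2:])

A layer of this kind would stand in for each plain convolution in an otherwise image-conditional decoder, so that varying the latent z yields diverse completions of the same masked input.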

Published at ICLR 2021.

Results from the Paper


Task               Dataset           Model      Metric Name   Metric Value   Global Rank
Image Inpainting   CelebA-HQ         CoModGAN   FID           5.65           #3
Image Inpainting   CelebA-HQ         CoModGAN   P-IDS         11.23          #2
Image Inpainting   CelebA-HQ         CoModGAN   U-IDS         22.54          #2
Image Inpainting   FFHQ 512 x 512    CoModGAN   P-IDS         16.6%          #1
Image Inpainting   FFHQ 512 x 512    CoModGAN   U-IDS         29.4%          #1
Image Inpainting   FFHQ 512 x 512    CoModGAN   FID           3.7            #3
Image Inpainting   Places2           CoModGAN   FID           2.92           #3
Image Inpainting   Places2           CoModGAN   P-IDS         19.64          #3
Image Inpainting   Places2           CoModGAN   U-IDS         35.78          #3
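
As a rough illustration of how P-IDS/U-IDS values like those above can be obtained, the sketch below fits a linear SVM on pre-extracted Inception-style features of paired real and completed images and measures linear separability. The function name and the assumption of pre-extracted (N, D) feature arrays are illustrative, and this approximates the metric's stated definition rather than reproducing the paper's reference implementation.

# Hypothetical sketch of the linear-separability idea behind P-IDS/U-IDS.
# Assumes `real_feats` and `fake_feats` are paired Inception-style feature
# arrays of shape (N, D); not the authors' reference implementation.
import numpy as np
from sklearn.svm import LinearSVC

def pids_uids(real_feats: np.ndarray, fake_feats: np.ndarray):
    X = np.concatenate([real_feats, fake_feats])
    y = np.concatenate([np.ones(len(real_feats)), np.zeros(len(fake_feats))])
    svm = LinearSVC(dual=False).fit(X, y)  # linear real-vs-fake classifier

    # U-IDS: fraction of samples the linear classifier fails to separate
    u_ids = 0.5 * ((svm.predict(real_feats) == 0).mean()
                   + (svm.predict(fake_feats) == 1).mean())

    # P-IDS: fraction of pairs where the completed image scores as "more real"
    # than its real counterpart under the classifier's decision function
    p_ids = (svm.decision_function(fake_feats)
             > svm.decision_function(real_feats)).mean()
    return p_ids, u_ids

Higher values of both scores indicate that completed images are harder to distinguish from real ones in the feature space.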
