StarGAN v2: Diverse Image Synthesis for Multiple Domains

CVPR 2020 · Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, offering limited diversity or requiring multiple models to cover all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, a dataset of high-quality animal faces with large inter- and intra-domain variation. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.
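
The two properties named in the abstract map onto two components: a mapping network F(z, y) that turns a random latent z into a style code for target domain y, and a single generator G(x, s) shared across all domains. The sketch below is a minimal illustration of that interface only, not the paper's architecture; every module body, layer size, and name here is a placeholder (the real generator is an encoder-decoder with AdaIN, and the full framework also includes a style encoder and a discriminator).

```python
# Minimal sketch of the StarGAN v2 interface: F(z, y) -> style code,
# G(x, s) -> translated image. All sizes/bodies are illustrative stubs.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Latent z -> style code s, with one output head per domain."""
    def __init__(self, latent_dim=16, style_dim=64, num_domains=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(512, style_dim) for _ in range(num_domains))

    def forward(self, z, y):
        h = self.shared(z)
        # select the head matching each sample's target domain
        return torch.stack(
            [self.heads[d](h[i]) for i, d in enumerate(y.tolist())])

class Generator(nn.Module):
    """Stub generator: modulates image features with the style code.
    Stands in for the paper's AdaIN-based encoder-decoder."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)
        self.affine = nn.Linear(style_dim, 3)

    def forward(self, x, s):
        gain = self.affine(s).view(-1, 3, 1, 1)  # per-channel style gain
        return torch.tanh(self.conv(x) * gain)

G, F = Generator(), MappingNetwork()
x = torch.randn(4, 3, 256, 256)   # source images
y = torch.tensor([0, 1, 2, 0])    # target domains (e.g. cat/dog/wild)
z = torch.randn(4, 16)            # different z -> diverse styles
fake = G(x, F(z, y))              # one generator covers every domain
print(fake.shape)                 # torch.Size([4, 3, 256, 256])
```

Sampling several latents z for the same input and target domain is what produces the diverse outputs that the LPIPS numbers below quantify.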


Datasets

Introduced in the Paper: AFHQ

Used in the Paper: FFHQ, CelebA-HQ
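
The released AFHQ dataset unpacks into one folder per domain, so a generic torchvision loader is enough to iterate over (image, domain-label) pairs. The snippet below is a minimal sketch under that assumption; the path, batch size, and image size are placeholders.

```python
# Minimal sketch of iterating over AFHQ with torchvision, assuming the
# directory layout produced by the repository's download script
# (afhq/train/{cat,dog,wild}); path and sizes are placeholders.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),               # AFHQ ships at 512x512
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # map pixels to [-1, 1]
])

train_set = datasets.ImageFolder("afhq/train", transform=transform)
loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=4)

images, domains = next(iter(loader))  # labels follow folder order:
print(images.shape, domains)          # 0=cat, 1=dog, 2=wild
```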

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
| Image-to-Image Translation | AFHQ | StarGAN v2 | FID | 24.4 | #1 |
| Image-to-Image Translation | AFHQ | StarGAN v2 | LPIPS | 0.524 | #1 |
| Multimodal Unsupervised Image-To-Image Translation | AFHQ | StarGAN v2 | FID | 16.2 | #1 |
| Multimodal Unsupervised Image-To-Image Translation | CelebA-HQ | StarGAN v2 | FID | 13.73 | #1 |
| Image-to-Image Translation | CelebA-HQ | StarGAN v2 | FID | 13.73 | #1 |
| Image-to-Image Translation | CelebA-HQ | StarGAN v2 | LPIPS | 0.428 | #1 |
| Fundus to Angiography Generation | Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients | StarGAN-v2 | FID | 27.7 | #5 |
| Fundus to Angiography Generation | Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients | StarGAN-v2 | Kernel Inception Distance | 0.00118 | #3 |
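
The metrics above follow their standard definitions: FID compares feature statistics of generated outputs against real images of the target domain (lower is better), while LPIPS here measures diversity as the mean perceptual distance between multiple outputs synthesized from the same input (higher is better). Below is a hedged sketch of how one might compute both with the third-party clean-fid and lpips packages; this is not the authors' exact evaluation protocol, and the directory names are placeholders.

```python
# Hedged sketch of both metrics via the `clean-fid` and `lpips` pip
# packages; not the authors' exact protocol, and the directories below
# are placeholders that must point to real image folders.
import itertools
import torch
import lpips                   # pip install lpips
from cleanfid import fid       # pip install clean-fid

# FID: feature-statistics distance between real target-domain images
# and the translated outputs (lower is better).
fid_score = fid.compute_fid("afhq/val/cat", "outputs/cat")

# LPIPS diversity: mean pairwise perceptual distance among outputs
# generated from one source image with different style codes
# (higher means more diverse). Inputs are expected in [-1, 1].
loss_fn = lpips.LPIPS(net="alex")
outputs = torch.randn(10, 3, 256, 256).clamp(-1, 1)  # stand-in samples
dists = [loss_fn(outputs[i:i + 1], outputs[j:j + 1]).item()
         for i, j in itertools.combinations(range(len(outputs)), 2)]
print(f"FID {fid_score:.2f} | LPIPS diversity {sum(dists)/len(dists):.3f}")
```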

Methods


No methods listed for this paper.