Image-to-Image Translation

490 papers with code • 37 benchmarks • 29 datasets

Image-to-Image Translation is a computer vision and machine learning task whose goal is to learn a mapping from an input image to an output image, supporting applications such as style transfer, data augmentation, and image restoration.

(Image credit: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)
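
In the simplest supervised (paired) setting, the mapping is a neural network trained to reproduce the target image from the input. The sketch below illustrates this with a toy encoder-decoder and a plain L1 reconstruction loss; the shapes and the loss are illustrative stand-ins, not the objective of any particular paper listed here.

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder generator: input image -> output image.
# Channel counts and image size are illustrative.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),             # downsample
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # upsample
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)

optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1 = nn.L1Loss()

# One paired training step: (input, target) come from an aligned dataset,
# e.g. edge maps paired with photos. Random tensors stand in for real data here.
x = torch.randn(8, 3, 64, 64)   # input-domain images
y = torch.randn(8, 3, 64, 64)   # corresponding target-domain images

y_hat = generator(x)
loss = l1(y_hat, y)             # paired reconstruction objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```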


Most implemented papers

Learning to Adapt Structured Output Space for Semantic Segmentation

wasidennis/AdaptSegNet CVPR 2018

In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation.
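
The core idea is a discriminator operating on the segmentation softmax outputs, so that target-domain predictions are pushed to resemble source-domain ones. Below is a simplified sketch of that output-space adversarial scheme; the networks, class count, and loss weight are illustrative stand-ins, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # illustrative label set size

# Stand-in segmentation network: image -> per-pixel class scores.
seg_net = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, NUM_CLASSES, 1),
)

# Fully convolutional discriminator over softmax output maps:
# predicts whether a segmentation map comes from source or target.
disc = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

opt_seg = torch.optim.SGD(seg_net.parameters(), lr=2.5e-4, momentum=0.9)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

src_img = torch.randn(2, 3, 64, 64)
src_lbl = torch.randint(0, NUM_CLASSES, (2, 64, 64))
tgt_img = torch.randn(2, 3, 64, 64)   # unlabeled target-domain images

# 1) Segmentation-network update: supervised CE on source +
#    adversarial term that pushes target outputs to look "source-like".
opt_seg.zero_grad()
src_out = seg_net(src_img)
tgt_out = seg_net(tgt_img)
seg_loss = F.cross_entropy(src_out, src_lbl)
d_on_tgt = disc(F.softmax(tgt_out, dim=1))
adv_loss = bce(d_on_tgt, torch.ones_like(d_on_tgt))   # fool the discriminator
(seg_loss + 0.001 * adv_loss).backward()
opt_seg.step()

# 2) Discriminator update: separate source vs. target output maps.
opt_d.zero_grad()
d_src = disc(F.softmax(src_out.detach(), dim=1))
d_tgt = disc(F.softmax(tgt_out.detach(), dim=1))
d_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
d_loss.backward()
opt_d.step()
```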

Joint Discriminative and Generative Learning for Person Re-identification

layumi/Person_reID_baseline_pytorch CVPR 2019

To this end, we propose a joint learning framework that couples re-id learning and data generation end-to-end.

Taming Transformers for High-Resolution Image Synthesis

CompVis/taming-transformers CVPR 2021

We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.
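
Concretely, a convolutional autoencoder compresses the image into a grid of discrete codebook indices, and a transformer then models those indices autoregressively. The following toy sketch shows the two stages with stand-in sizes; it is not the VQGAN/transformer configuration from the paper.

```python
import torch
import torch.nn as nn

codebook_size, code_dim = 512, 64

# Stage 1: CNN encoder maps a 64x64 image to an 8x8 grid of features,
# each of which is snapped to its nearest codebook entry.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, code_dim, 4, stride=2, padding=1),
)
codebook = nn.Embedding(codebook_size, code_dim)

x = torch.randn(1, 3, 64, 64)
feats = encoder(x)                              # (1, code_dim, 8, 8)
flat = feats.flatten(2).transpose(1, 2)         # (1, 64, code_dim)

# Nearest-neighbour quantisation: each spatial feature -> index of closest code.
dists = torch.cdist(flat, codebook.weight.unsqueeze(0))   # (1, 64, codebook_size)
indices = dists.argmin(-1)                                 # (1, 64) discrete tokens

# Stage 2: a transformer predicts the next token given the previous ones,
# so new images can be synthesised code-by-code and then decoded.
tok_emb = nn.Embedding(codebook_size, 128)
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2,
)
to_logits = nn.Linear(128, codebook_size)

seq = tok_emb(indices)                          # (1, 64, 128)
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
logits = to_logits(transformer(seq, mask=causal_mask))   # next-token logits
```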

Multimodal Token Fusion for Vision Transformers

huawei-noah/noah-research CVPR 2022

Many adaptations of transformers have emerged to address single-modal vision tasks, where self-attention modules are stacked to handle input sources such as images.
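
The paper extends this to multiple modalities by substituting tokens that a learned scorer marks as uninformative with projected tokens from the other modality at the same positions. A toy sketch of that substitution step is shown below; the scoring rule, dimensions, and projection are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

dim, num_tokens = 256, 196

score = nn.Linear(dim, 1)          # importance score per RGB token
project = nn.Linear(dim, dim)      # projects depth tokens into the RGB token space
block = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)

rgb_tokens = torch.randn(2, num_tokens, dim)
depth_tokens = torch.randn(2, num_tokens, dim)

# Keep RGB tokens the scorer deems informative; replace the rest with
# projected depth tokens at the same spatial positions, then run a shared layer.
keep = torch.sigmoid(score(rgb_tokens)) > 0.5                # (2, 196, 1) mask
fused = torch.where(keep, rgb_tokens, project(depth_tokens))
out = block(fused)
```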

Few-Shot Unsupervised Image-to-Image Translation

NVlabs/FUNIT ICCV 2019

Unsupervised image-to-image translation methods learn to map an image in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images.

Contrastive Learning for Unpaired Image-to-Image Translation

taesungp/contrastive-unpaired-translation 30 Jul 2020

Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset.
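
In code, this amounts to a patchwise InfoNCE loss where the positive is the input patch at the same location and the negatives are the other patches of the same input. A minimal sketch follows, with toy feature shapes rather than the paper's multi-layer, projected features.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_out, feat_in, temperature=0.07):
    """Pull each output-patch feature toward the input-patch feature at the
    same location; all other patches of the same input act as negatives."""
    feat_out = F.normalize(feat_out, dim=1)
    feat_in = F.normalize(feat_in, dim=1)
    logits = feat_out @ feat_in.t() / temperature    # (P, P) patch similarities
    targets = torch.arange(feat_out.size(0))         # positive = same location
    return F.cross_entropy(logits, targets)

num_patches, dim = 256, 128
feat_in = torch.randn(num_patches, dim)    # features of patches from the input image
feat_out = torch.randn(num_patches, dim)   # features of the corresponding output patches
loss = patch_nce_loss(feat_out, feat_in)
```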

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation

eladrich/pixel2style2pixel CVPR 2021

We present a generic image-to-image translation framework, pixel2style2pixel (pSp).
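
The framework encodes the input image directly into the extended latent space of a fixed, pretrained StyleGAN generator. The sketch below shows only that encoder-to-W+ wiring; the backbone and the latent sizes (18 style vectors of width 512) follow common StyleGAN conventions and are stand-ins, not the pSp or StyleGAN2 architectures.

```python
import torch
import torch.nn as nn

N_STYLES, STYLE_DIM = 18, 512   # one style vector per generator layer (W+)

class ToyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_styles = nn.Linear(64, N_STYLES * STYLE_DIM)

    def forward(self, img):
        # Map the input image to one style vector per generator layer.
        w_plus = self.to_styles(self.backbone(img))
        return w_plus.view(-1, N_STYLES, STYLE_DIM)

encoder = ToyEncoder()
img = torch.randn(4, 3, 256, 256)
w_plus = encoder(img)   # (4, 18, 512) latent codes
# In the full framework these codes are fed to a fixed, pretrained StyleGAN
# generator, so different translation tasks (inversion, inpainting,
# sketch-to-face, ...) only change the input domain of the encoder.
```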

Adversarially Learned Inference

IshmaelBelghazi/ALI 2 Jun 2016

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.
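
The adversarial game is played over joint (image, latent) pairs: the discriminator sees either a real image paired with its inferred code or a prior sample paired with its generated image. A toy sketch with small MLPs (not the paper's convolutional networks) is below.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 64

generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
inference = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
discriminator = nn.Sequential(nn.Linear(x_dim + z_dim, 256), nn.ReLU(),
                              nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

x_real = torch.randn(32, x_dim)    # data samples (stand-in)
z_prior = torch.randn(32, z_dim)   # samples from the latent prior

# Pair 1: real data with its inferred latent; Pair 2: prior latent with generated data.
pair_real = torch.cat([x_real, inference(x_real)], dim=1)
pair_fake = torch.cat([generator(z_prior), z_prior], dim=1)

# Discriminator loss: label (x, inferred z) pairs as real and
# (generated x, z) pairs as fake; generator and inference network are
# trained to flip these labels.
d_loss = bce(discriminator(pair_real), torch.ones(32, 1)) + \
         bce(discriminator(pair_fake), torch.zeros(32, 1))
```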

Learning from Simulated and Unsupervised Images through Adversarial Training

carpedm20/simulated-unsupervised-tensorflow CVPR 2017

With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.

Unsupervised Image-to-Image Translation Networks

mingyuliutw/UNIT NeurIPS 2017

Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
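
The key modeling assumption is a shared latent space: encoders from both domains map into the same code, and translation is encoding with one domain's encoder and decoding with the other's. The sketch below shows only that wiring, with toy fully connected networks in place of the paper's VAE-GAN architecture.

```python
import torch
import torch.nn as nn

latent_dim = 128
img_dim = 3 * 64 * 64   # flattened toy images

# Each domain has its own encoder/decoder, but both encoders target the
# same latent space.
enc_a = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
enc_b = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
dec_a = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))
dec_b = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))

x_a = torch.randn(8, img_dim)   # images from domain A
z = enc_a(x_a)                  # shared latent code
x_a_recon = dec_a(z)            # within-domain reconstruction
x_ab = dec_b(z)                 # A -> B translation via the shared code
# Training in the paper also uses VAE reconstruction terms, adversarial losses
# in both image domains, and cycle-consistency; only the shared-code wiring
# is shown here.
```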