Conditional Image Generation

131 papers with code • 10 benchmarks • 8 datasets

Conditional image generation is the task of generating new images conditioned on auxiliary information, most commonly a class label.

(Image credit: PixelCNN++)
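To make the task concrete, here is a minimal sketch of a class-conditional generator in PyTorch. The architecture, layer sizes, and names are illustrative assumptions, not any particular paper's model; the point is only that the class label enters the generator as a learned embedding.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, num_classes: int, latent_dim: int = 128):
        super().__init__()
        # The class label is mapped to a learned embedding and concatenated
        # with the noise vector, so each class shapes the output distribution.
        self.label_embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * 32 * 32),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond).view(-1, 3, 32, 32)

# Sample four images of class 7:
gen = ConditionalGenerator(num_classes=10)
z = torch.randn(4, 128)
images = gen(z, torch.full((4,), 7, dtype=torch.long))  # (4, 3, 32, 32)
```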

Latest papers with no code

ArcGAN: Generative Adversarial Networks for 3D Architectural Image Generation

no code yet • IDEA 2K22 2023

Owing to advances in infrastructure, architectural design has become one of the most intricate and tedious processes.

Manifold Preserving Guided Diffusion

no code yet • 28 Nov 2023

Despite the recent advancements, conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training.

Guided Flows for Generative Modeling and Decision Making

no code yet • 22 Nov 2023

Classifier-free guidance is a key component for enhancing the performance of conditional generative models across diverse tasks.
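For context, classifier-free guidance evaluates the same denoiser with and without the condition and extrapolates toward the conditional prediction. A minimal sketch follows; the `model(x_t, t, cond)` interface, where `cond=None` yields the unconditional prediction, is an assumption for illustration.

```python
import torch

def cfg_eps(model, x_t, t, cond, guidance_scale=7.5):
    # Run the denoiser twice: once without the condition and once with it.
    eps_uncond = model(x_t, t, None)
    eps_cond = model(x_t, t, cond)
    # scale = 0 -> unconditional; 1 -> plain conditional;
    # > 1 -> amplified conditioning (the usual regime).
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```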

Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis

no code yet • ICCV 2023

To this end, capitalizing on the fine-grained generative control offered by recent diffusion-based generative models, we introduce Steered Diffusion, a generalized framework for photorealistic zero-shot conditional image generation using a diffusion model trained for unconditional generation.
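A rough sketch of the plug-and-play idea, with hypothetical `sampler_step` and `guidance_loss` interfaces (both assumptions, not the paper's API): sample the unconditionally trained model as usual, but nudge every reverse step down the gradient of a task-specific loss that encodes the condition.

```python
import torch

def steered_sampling(model, sampler_step, guidance_loss, x_T, timesteps, eta=1.0):
    # Steer an unconditional diffusion model at inference time only:
    # no retraining; the condition enters through guidance_loss.
    x = x_T
    for t in timesteps:
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(guidance_loss(x, t), x)[0]
        with torch.no_grad():
            # One ordinary reverse-diffusion step, then a guidance nudge.
            x = sampler_step(model, x, t) - eta * grad
    return x.detach()
```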

Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model

no code yet • ICCV 2023

Therefore, Diff-Retinex formulates low-light image enhancement as Retinex decomposition followed by conditional image generation.
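The Retinex model underlying this formulation treats an observed image S as the element-wise product of reflectance R and illumination L, i.e. S = R ⊙ L. Below is a crude illustrative decomposition; the smoothing-based illumination estimate is an assumption for demonstration, not Diff-Retinex's learned decomposition.

```python
import torch
import torch.nn.functional as F

def naive_retinex_decompose(image: torch.Tensor):
    """image: (B, 3, H, W) in [0, 1]. Returns (reflectance, illumination)
    such that reflectance * illumination ~= image."""
    # Illumination estimate: smoothed per-pixel maximum over channels.
    max_c = image.max(dim=1, keepdim=True).values
    illumination = F.avg_pool2d(max_c, kernel_size=15, stride=1, padding=7)
    reflectance = image / illumination.clamp(min=1e-4)
    return reflectance, illumination
```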

Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data

no code yet • 17 Jul 2023

Noisy-labeled or curated unlabeled data is used to relax the assumption of clean labeled data when training conditional generative adversarial networks; however, satisfying even this extended assumption is occasionally laborious or impractical.

AniFaceDrawing: Anime Portrait Exploration during Your Sketching

no code yet • 13 Jun 2023

In the second stage, we simulated the drawing process on the generated images without any additional data (labels) and trained a sketch encoder so that incomplete, progressive sketches yield high-quality portrait images whose features align with the disentangled representations of the teacher encoder.

SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions

no code yet • NeurIPS 2023

Specifically, we compute the gradient of the perceptual loss using the predicted denoised images at each denoising step, providing meaningful guidance for achieving coherent montages.
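A sketch of that step, assuming an epsilon-prediction denoiser, a scalar tensor `alpha_bar_t`, and a generic `perceptual_loss` (all assumed interfaces): predict the clean image from the current noisy sample via the standard DDPM identity, score it, and descend the gradient with respect to x_t.

```python
import torch

def guided_denoise_step(eps_model, x_t, t, alpha_bar_t,
                        perceptual_loss, ref, w=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # Predicted denoised image: x0_hat = (x_t - sqrt(1 - a_bar) * eps) / sqrt(a_bar)
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    loss = perceptual_loss(x0_hat, ref)
    grad = torch.autograd.grad(loss, x_t)[0]
    # Return the guidance correction applied to x_t; the ordinary
    # reverse-diffusion update would follow.
    return x_t.detach() - w * grad
```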

Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image Generation

no code yet • 1 Jun 2023

In this work, we propose Cocktail, a pipeline to mix various modalities into one embedding, amalgamated with a generalized ControlNet (gControlNet), a controllable normalisation (ControlNorm), and a spatial guidance sampling method, to actualize multi-modal and spatially-refined control for text-conditional diffusion models.
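As a hedged illustration of mixing several control modalities into one embedding: each control signal gets its own encoder and the embeddings are fused into a single conditioning tensor. The encoder names and the sum-then-project fusion rule here are assumptions for illustration, not the paper's gControlNet.

```python
import torch
import torch.nn as nn

class ModalityMixer(nn.Module):
    def __init__(self, encoders: dict, dim: int):
        super().__init__()
        # One encoder per control signal (sketch, pose, segmentation, ...).
        self.encoders = nn.ModuleDict(encoders)
        self.fuse = nn.Linear(dim, dim)

    def forward(self, controls: dict) -> torch.Tensor:
        # Sum the embeddings of whichever modalities are present;
        # absent modalities simply contribute nothing.
        mixed = sum(self.encoders[name](x) for name, x in controls.items())
        return self.fuse(mixed)

# e.g. mixer = ModalityMixer({"sketch": sketch_enc, "pose": pose_enc}, dim=768)
#      cond = mixer({"sketch": s, "pose": p})  # one conditioning tensor
```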

DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion

no code yet • 24 May 2023

We evaluated our method on the AFHQ, Food-101, and CIFAR-10 datasets and observed superior results on metrics such as FID, KID, Precision, and Recall compared with competing models, highlighting the effectiveness of our approach.