Conditional Image Generation
133 papers with code • 10 benchmarks • 8 datasets
Conditional image generation is the task of generating new images conditioned on side information, such as the class label of each image in a dataset.
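The simplest form of class conditioning concatenates a one-hot class label to the latent noise vector before it enters the generator. A minimal sketch of that interface (the `generator` callable and all names here are illustrative, not from any specific model on this page):

```python
import numpy as np

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def conditional_generate(generator, label, num_classes, latent_dim, rng):
    """Sample a latent, append the class code, and run the generator on the pair."""
    z = rng.standard_normal(latent_dim)
    return generator(np.concatenate([z, one_hot(label, num_classes)]))
```

Richer conditioning signals (text embeddings, semantic maps, sketches) follow the same pattern: the condition is encoded to a vector or feature map and fused with the generator's input or intermediate features.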
(Image credit: PixelCNN++)
Libraries
Use these libraries to find Conditional Image Generation models and implementations.
Latest papers
Elucidating The Design Space of Classifier-Guided Diffusion Generation
Guidance in conditional diffusion generation is of great importance for sample quality and controllability.
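Classifier guidance steers an unconditional diffusion score toward a target class by adding the (scaled) gradient of a classifier's log-probability with respect to the sample. The sketch below illustrates that combination with a finite-difference gradient; the function names and the toy setup are assumptions for illustration, not this paper's method:

```python
import numpy as np

def classifier_guided_score(x, y, score_fn, log_prob_fn, scale=1.0, eps=1e-4):
    """Combine an unconditional score with a classifier gradient:
    guided score = score_fn(x) + scale * d/dx log p(y | x).
    The classifier gradient is approximated by central finite differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        grad[i] = (log_prob_fn(x + d, y) - log_prob_fn(x - d, y)) / (2 * eps)
    return score_fn(x) + scale * grad
```

In practice the classifier gradient comes from automatic differentiation through a noise-aware classifier, and `scale` trades sample diversity for conditioning fidelity.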
Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel
We propose conditional flows of the maximum mean discrepancy (MMD) with the negative distance kernel for posterior sampling and conditional generative modeling.
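The negative distance kernel is simply k(a, b) = -||a - b||, and the squared MMD between two samples under any kernel is the usual three-term estimator. A minimal numpy sketch (this is the generic biased estimator, not the paper's conditional-flow machinery):

```python
import numpy as np

def neg_dist_kernel(a, b):
    """Negative Euclidean distance kernel: k(a, b) = -||a - b||."""
    return -np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def mmd2(x, y):
    """Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = neg_dist_kernel(x, x)
    kyy = neg_dist_kernel(y, y)
    kxy = neg_dist_kernel(x, y)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()
```

With this kernel the squared MMD coincides (up to a constant factor) with the energy distance, which is what makes its gradient flows tractable for sampling.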
ImagenHub: Standardizing the evaluation of conditional image generation models
Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc.
Diverse Semantic Image Editing with Style Codes
Semantic image editing requires inpainting pixels following a semantic map.
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.
Late-Constraint Diffusion Guidance for Controllable Image Synthesis
Specifically, we train a lightweight condition adapter to establish the correlation between external conditions and internal representations of diffusion models.
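A lightweight condition adapter can be as small as a two-layer MLP whose output is added as a residual to the diffusion model's internal features. The sketch below shows that shape with a zero-initialized output layer, so the adapter is a no-op before training; the class name and dimensions are illustrative, not this paper's architecture:

```python
import numpy as np

class ConditionAdapter:
    """Tiny MLP mapping an external condition vector to a residual on
    internal features. Zero-init on the output layer means the adapter
    initially leaves the frozen base model's features unchanged."""

    def __init__(self, cond_dim, feat_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((cond_dim, hidden)) * 0.1
        self.w2 = np.zeros((hidden, feat_dim))  # zero-init output layer

    def __call__(self, features, condition):
        h = np.maximum(condition @ self.w1, 0.0)  # ReLU hidden layer
        return features + h @ self.w2             # residual connection
```

Only the adapter's parameters are trained, which keeps the cost of adding a new condition type far below fine-tuning the full diffusion model.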
SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input Views
We study generating novel views of indoor scenes given sparse input views.
NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs
We find that one reason for degradation is the collapse of latents for each class in the $\mathcal{W}$ latent space.
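One direct way to detect this kind of per-class collapse is to measure the spread of sampled latents within each class: a class whose latents have near-zero spread will produce near-identical images. A minimal diagnostic sketch (the function name and metric choice are illustrative, not the paper's):

```python
import numpy as np

def per_class_latent_spread(latents, labels):
    """Mean per-dimension standard deviation of latents within each class.
    Values near zero indicate the class's latents have collapsed."""
    spread = {}
    for c in np.unique(labels):
        w = latents[labels == c]
        spread[int(c)] = float(w.std(axis=0).mean())
    return spread
```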
Trade-offs in Fine-tuned Diffusion Models Between Accuracy and Interpretability
Recent advancements in diffusion models have significantly impacted the trajectory of generative machine learning research, with many adopting the strategy of fine-tuning pre-trained models using domain-specific text-to-image datasets.
Polynomial Implicit Neural Representations For Large Diverse Datasets
With far fewer training parameters and higher representative power, our approach paves the way for broader adoption of INR models for generative modeling tasks in complex domains.
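The core idea of a polynomial implicit neural representation is to predict a pixel value directly from a polynomial basis of its coordinates, rather than from sinusoidal positional encodings. A minimal sketch of evaluating such a representation (the basis construction and names here are a simplification, not the paper's exact parameterization):

```python
import numpy as np

def poly_inr(coords, weights, degree=3):
    """Evaluate a polynomial implicit representation at 2D coordinates.
    Builds monomial features [x^i * y^j for i + j <= degree] and takes a
    linear combination with `weights`."""
    x, y = coords[..., 0], coords[..., 1]
    feats = [x**i * y**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    phi = np.stack(feats, axis=-1)   # (..., num_monomials)
    return phi @ weights             # (...,) predicted value per coordinate
```

Because the basis grows only polynomially with degree, the representation needs far fewer parameters than a deep coordinate MLP of comparable expressive power on smooth signals.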