Conditional Image Generation
133 papers with code • 10 benchmarks • 8 datasets
Conditional image generation is the task of generating new images from a dataset, conditioned on a given signal such as their class label.
(Image credit: PixelCNN++)
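As a minimal illustration of the conditioning idea (not tied to any particular paper below), the sketch assumes a toy class-conditional generator in PyTorch: the class label is embedded and concatenated with a noise vector before being decoded into an image. All module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy class-conditional generator: image = G(noise, class label)."""
    def __init__(self, num_classes=10, noise_dim=64, img_size=28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, 32)   # learned label embedding
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 32, 256),
            nn.ReLU(),
            nn.Linear(256, img_size * img_size),
            nn.Tanh(),                                    # pixel values in [-1, 1]
        )
        self.img_size = img_size

    def forward(self, z, labels):
        # Concatenate the noise vector with the embedded class label, then decode.
        h = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(h).view(-1, 1, self.img_size, self.img_size)

# Sample three images of class 7 (weights are untrained, so the output is noise-like).
gen = ConditionalGenerator()
z = torch.randn(3, 64)
imgs = gen(z, torch.tensor([7, 7, 7]))
print(imgs.shape)  # torch.Size([3, 1, 28, 28])
```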
Latest papers without code
Watermark-embedded Adversarial Examples for Copyright Protection against Diffusion Models
Diffusion Models (DMs) have shown remarkable capabilities in various image-generation tasks.
In-Context Translation: Towards Unifying Image Recognition, Processing, and Generation
Secondly, it standardizes the training of different tasks into a general in-context learning framework, where "in-context" means the input comprises an example input-output pair of the target task and a query image.
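A rough sketch of what such an in-context input could look like, assuming a hypothetical model that consumes an (example input, example output, query) triple as one channel-stacked tensor; the function name and the concatenation scheme are assumptions, not the paper's actual interface.

```python
import torch

def build_in_context_input(example_in, example_out, query):
    """Stack an example input/output pair with the query image along the channel axis.

    All tensors are assumed to be (C, H, W) images of the same size; a real
    in-context model may instead tile them spatially or encode them as tokens.
    """
    return torch.cat([example_in, example_out, query], dim=0)  # (3*C, H, W)

# Example: a denoising prompt (noisy -> clean) plus a new noisy query image.
example_in  = torch.rand(3, 64, 64)   # noisy example
example_out = torch.rand(3, 64, 64)   # its clean counterpart
query       = torch.rand(3, 64, 64)   # image the model should process the same way
prompt = build_in_context_input(example_in, example_out, query)
print(prompt.shape)  # torch.Size([9, 64, 64])
```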
Condition-Aware Neural Network for Controlled Image Generation
In contrast to prior conditional control methods, CAN controls the image generation process by dynamically manipulating the weight of the neural network.
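The excerpt describes generating network weights from the condition rather than feeding the condition in as an extra input. A minimal sketch of that general idea (an assumption for illustration, not the paper's architecture): a small hypernetwork maps a class embedding to the weight of a linear layer, which is then applied to the features.

```python
import torch
import torch.nn as nn

class ConditionAwareLinear(nn.Module):
    """Linear layer whose weight matrix is generated from a condition embedding."""
    def __init__(self, num_classes=10, in_dim=64, out_dim=64, cond_dim=32):
        super().__init__()
        self.cond_emb = nn.Embedding(num_classes, cond_dim)
        # Hypernetwork: condition embedding -> flattened per-sample weight matrix.
        self.weight_gen = nn.Linear(cond_dim, in_dim * out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, labels):
        w = self.weight_gen(self.cond_emb(labels))          # (B, in_dim * out_dim)
        w = w.view(-1, self.out_dim, self.in_dim)           # one weight matrix per sample
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1)    # condition-dependent transform

layer = ConditionAwareLinear()
x = torch.randn(4, 64)
y = layer(x, torch.tensor([0, 1, 2, 3]))
print(y.shape)  # torch.Size([4, 64])
```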
Conditional Wasserstein Distances with Applications in Bayesian OT Flow Matching
In inverse problems, many conditional generative models approximate the posterior measure by minimizing a distance between the joint measure and its learned approximation.
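In generic notation (assumed here for illustration, not taken from the paper): with observation $Y$, unknown $X$, latent noise $Z$, and a conditional generator $G_\theta(Y, Z)$, such methods compare the true joint distribution with the generated one under some distance between measures, for example

```latex
% Illustrative conditional-generative objective (notation assumed, not the paper's):
%   P_{X,Y}             : true joint law of (unknown, observation)
%   P_{G_\theta(Y,Z),Y} : joint law of (generated sample, observation), Z ~ N(0, I)
%   D                   : a distance between measures, e.g. a Wasserstein distance
\min_{\theta} \; D\bigl( P_{X,Y}, \; P_{G_\theta(Y,Z),\,Y} \bigr)
```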
Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation
Given that many deterministic conditional image generative models have been able to produce high-quality yet fixed results, we raise an intriguing question: is it possible for pre-trained deterministic conditional image generative models to generate diverse results without changing network structures or parameters?
Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models
This paper introduces Bespoke Non-Stationary (BNS) Solvers, a solver distillation approach to improve sample efficiency of Diffusion and Flow models.
Rethinking cluster-conditioned diffusion models
We present a comprehensive experimental study on image-level conditioning for diffusion models using cluster assignments.
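A minimal sketch of the general recipe behind cluster-level conditioning (an assumption for illustration, not the paper's pipeline): cluster image features with k-means and use the resulting cluster ids as pseudo-labels in place of human class labels for a conditional model.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume `features` are precomputed image embeddings, e.g. from a pretrained encoder.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128)).astype(np.float32)  # placeholder features

# Cluster assignments become the conditioning signal instead of human labels.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_  # shape (1000,), values in [0, 20)

# A conditional diffusion model would then be trained on (image, pseudo_label) pairs
# exactly as it would be on (image, class_label) pairs.
print(pseudo_labels[:10])
```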
Structure-Guided Adversarial Training of Diffusion Models
In this pioneering approach, we compel the model to learn manifold structures between samples in each training batch.
UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion
This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, which demonstrates a unified ability for both text-driven and subject-driven image generation.
CIMGEN: Controlled Image Manipulation by Finetuning Pretrained Generative Models on Limited Data
Content creation and image editing can benefit from flexible user controls.