Image Generation

1996 papers with code • 85 benchmarks • 67 datasets

Image Generation (synthesis) is the task of generating new images that resemble those in an existing dataset.

  • Unconditional generation refers to sampling from the learned data distribution without any conditioning signal, i.e. $p(y)$
  • Conditional image generation (subtask) refers to sampling conditioned on additional information such as a class label $x$, i.e. $p(y|x)$
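The distinction can be sketched with a toy generator interface (all names and the linear "generator" below are illustrative stand-ins, not any real model's API; a trained network such as StyleGAN would replace the random weight matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator" weights mapping a latent vector z (and, for the
# conditional case, a one-hot class embedding) to a flat 8x8 image.
# These random matrices stand in for a trained network.
LATENT_DIM, NUM_CLASSES, IMG_PIXELS = 16, 10, 64
W_z = rng.standard_normal((IMG_PIXELS, LATENT_DIM))
W_c = rng.standard_normal((IMG_PIXELS, NUM_CLASSES))

def sample_unconditional(n):
    """Draw n images from p(y): only a latent code z is sampled."""
    z = rng.standard_normal((n, LATENT_DIM))
    return np.tanh(z @ W_z.T)

def sample_conditional(n, label):
    """Draw n images from p(y|x): the class label x steers the output."""
    z = rng.standard_normal((n, LATENT_DIM))
    onehot = np.eye(NUM_CLASSES)[label]
    return np.tanh(z @ W_z.T + onehot @ W_c.T)

imgs = sample_unconditional(4)         # 4 samples, no conditioning
cats = sample_conditional(4, label=3)  # 4 samples, all of class 3
```

The only difference between the two samplers is the extra conditioning term: conditional generation injects the label into the forward pass, while unconditional generation draws from the latent prior alone.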

In this section, you can find state-of-the-art leaderboards for unconditional generation. For conditional generation, and other types of image generation, refer to the subtasks.

(Image credit: StyleGAN)

Latest papers with no code

Synthesizing Iris Images using Generative Adversarial Networks: Survey and Comparative Analysis

no code yet • 26 Apr 2024

In this paper, we present a comprehensive review of state-of-the-art GAN-based synthetic iris image generation techniques, evaluating their strengths and limitations in producing realistic and useful iris images that can be used for both training and testing iris recognition systems and presentation attack detectors.

BlenderAlchemy: Editing 3D Graphics with Vision-Language Models

no code yet • 26 Apr 2024

Specifically, we design a vision-based edit generator and state evaluator to work together to find the correct sequence of actions to achieve the goal.

MuseumMaker: Continual Style Customization without Catastrophic Forgetting

no code yet • 25 Apr 2024

To deal with catastrophic forgetting amongst past learned styles, we devise a dual regularization for the shared-LoRA module to optimize the direction of model updates, which regularizes the diffusion model from both the weight and feature perspectives.

Conditional Distribution Modelling for Few-Shot Image Synthesis with Diffusion Models

no code yet • 25 Apr 2024

Few-shot image synthesis entails generating diverse and realistic images of novel categories using only a few example images.

Sketch2Human: Deep Human Generation with Disentangled Geometry and Appearance Control

no code yet • 24 Apr 2024

This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch (for geometry control) and a reference image (for appearance control).

SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models

no code yet • 23 Apr 2024

With the continuous advancement of vision-language model (VLM) technology, remarkable research achievements have emerged in dermatology, which addresses skin disease, the fourth most prevalent category of human disease.

From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation

no code yet • 23 Apr 2024

Addressing this, we introduce Parts2Whole, a novel framework designed for generating customized portraits from multiple reference images, including pose images and various aspects of human appearance.

Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation

no code yet • 23 Apr 2024

Recent studies have demonstrated the exceptional potential of leveraging human preference datasets to refine text-to-image generative models, enhancing the alignment between generated images and textual prompts.

FINEMATCH: Aspect-based Fine-grained Image and Text Mismatch Detection and Correction

no code yet • 23 Apr 2024

To address this, we propose FineMatch, a new aspect-based fine-grained text and image matching benchmark, focusing on text and image mismatch detection and correction.

ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning

no code yet • 23 Apr 2024

The rapid development of diffusion models has triggered diverse applications.