UPGPT: Universal Diffusion Model for Person Image Generation, Editing and Pose Transfer

18 Apr 2023 · Soon Yau Cheong, Armin Mustafa, Andrew Gilbert

Text-to-image (T2I) models such as Stable Diffusion have been used to generate high-quality images of people. However, due to the random nature of the generation process, the person's appearance, e.g. pose, face, and clothing, differs between images even when the same text prompt is used. This appearance inconsistency makes T2I unsuitable for pose transfer. We address this by proposing a multimodal diffusion model that accepts text, pose, and visual prompting. Our model is the first unified method to perform all person image tasks: generation, pose transfer, and mask-less editing. We also pioneer the direct use of low-dimensional 3D body model parameters to demonstrate a new capability: simultaneous pose and camera view interpolation while maintaining the person's appearance.
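
The interpolation capability follows from conditioning on the low-dimensional body model parameters directly: intermediate poses and camera views can be obtained by interpolating the parameter vectors between two endpoints and rendering each intermediate vector with the same text and appearance conditioning. Below is a minimal sketch of this idea, assuming an SMPL-style 72-D pose vector and a 3-D camera translation; the paper's exact parameterization and sampling interface are not given in this abstract, so the dimensions and the `lerp_params` helper are illustrative:

```python
import numpy as np

def lerp_params(start: np.ndarray, end: np.ndarray, num_steps: int):
    """Linearly interpolate between two low-dimensional parameter vectors."""
    return [(1.0 - t) * start + t * end
            for t in np.linspace(0.0, 1.0, num_steps)]

# Hypothetical endpoints: a 72-D SMPL-style pose vector and a 3-D camera
# translation for the start and end of the interpolation sequence.
pose_a, pose_b = np.zeros(72), np.full(72, 0.1)
cam_a, cam_b = np.array([0.0, 0.0, 2.0]), np.array([0.6, 0.0, 2.0])

for step, (pose, cam) in enumerate(zip(lerp_params(pose_a, pose_b, 8),
                                       lerp_params(cam_a, cam_b, 8))):
    # Each (pose, cam) pair would be passed to the diffusion model as
    # conditioning, alongside fixed text/visual prompts, to render one
    # frame while keeping the person's appearance unchanged.
    print(f"step {step}: pose[:3]={pose[:3].round(3)}, cam={cam.round(3)}")
```

Note that linear interpolation of axis-angle pose parameters is only an approximation; a rotation-aware scheme such as per-joint slerp would be the more careful choice, but the principle is the same: the small parameter space makes smooth pose and view interpolation straightforward.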

Results from the Paper


 Ranked #1 on Pose Transfer on Deep-Fashion (FID metric)

Task            Dataset        Model   Metric   Value   Global Rank
Pose Transfer   Deep-Fashion   UPGPT   FID      9.427   #1
