1 code implementation • 5 Mar 2024 • Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, Robin Rombach
Rectified flow is a recent generative model formulation that connects data and noise in a straight line.
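The straight-line formulation above can be sketched in a few lines: data x0 and noise x1 are joined by the interpolant x_t = (1 - t) x0 + t x1, and a velocity model is regressed onto the constant direction x1 - x0. This is a minimal illustrative sketch (function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def rectified_flow_pair(x0, rng):
    """Sample a training tuple (x_t, t, target) for a rectified-flow model.

    x0: batch of data points, shape (batch, dim).
    Returns the straight-line interpolant x_t, the sampled times t,
    and the constant-velocity regression target x1 - x0.
    """
    x1 = rng.standard_normal(x0.shape)      # noise endpoint of the line
    t = rng.uniform(size=(x0.shape[0], 1))  # uniform time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1            # straight-line interpolant
    velocity = x1 - x0                      # target for the velocity net
    return xt, t, velocity
```

A network trained on this target can then generate samples by integrating the learned velocity field from noise (t = 1) back to data (t = 0).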
4 code implementations • 28 Nov 2023 • Axel Sauer, Dominik Lorenz, Andreas Blattmann, Robin Rombach
We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1-4 steps while maintaining high image quality.
2 code implementations • 2023 • Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach
We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation.
33 code implementations • CVPR 2022 • Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.
Ranked #2 on Layout-to-Image Generation on COCO-Stuff 256x256
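The "sequential application of denoising autoencoders" in the snippet above refers to the standard diffusion forward process, which admits a closed form: x_t = sqrt(ᾱ_t) x0 + sqrt(1 - ᾱ_t) ε. A hedged sketch of that forward noising step (textbook DDPM-style formulation, not the paper's own code):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form forward diffusion: noise x0 directly to step t.

    betas: per-step noise variances; alpha_bar_t is the cumulative
    product of (1 - beta) up to and including step t.
    Returns the noised sample x_t and the noise eps that was added.
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps
```

A denoising network trained to predict eps from (x_t, t) is then applied sequentially in reverse to map noise back to data; the latent-diffusion work runs this process in a learned autoencoder's latent space rather than pixel space.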
2 code implementations • CVPR 2019 • Dominik Lorenz, Leonard Bereska, Timo Milbich, Björn Ommer
Large intra-class variation is the result of changes in multiple object characteristics.
Ranked #3 on Unsupervised Human Pose Estimation on Human3.6M