DiffiT: Diffusion Vision Transformers for Image Generation

4 Dec 2023  ·  Ali Hatamizadeh, Jiaming Song, Guilin Liu, Jan Kautz, Arash Vahdat

Diffusion models, with their powerful expressivity and high sample quality, have achieved State-Of-The-Art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capabilities and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model, denoted Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multihead Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective in generating high-fidelity images with significantly better parameter efficiency. We also propose latent- and image-space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other Transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT
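The abstract's key architectural idea is that the attention weights themselves should depend on the denoising time step. A minimal sketch of what such a time-dependent self-attention layer could look like is given below, assuming queries, keys, and values are formed from both the spatial tokens and a shared time-step embedding; the module name, shapes, and projection layout are illustrative assumptions, not the official implementation (see https://github.com/NVlabs/DiffiT for that).

import torch
import torch.nn as nn


class TMSA(nn.Module):
    """Sketch of Time-dependent Multihead Self-Attention (illustrative, not official)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        # Separate linear projections for the spatial tokens and the time token,
        # so q/k/v mix spatial content with the current denoising step.
        self.qkv_spatial = nn.Linear(dim, 3 * dim, bias=False)
        self.qkv_time = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) spatial tokens; t_emb: (batch, dim) time embedding.
        B, N, D = x.shape
        # Conceptually: q = x @ W_qs + t @ W_qt (and likewise for k and v),
        # so attention weights change across denoising steps.
        qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
        q, k, v = (
            qkv.reshape(B, N, 3, self.num_heads, D // self.num_heads)
            .permute(2, 0, 3, 1, 4)
        )
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)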

Task              Dataset           Model   Metric           Value   Global Rank
Image Generation  CIFAR-10          DiffiT  FID              1.95    #15
Image Generation  ImageNet 256x256  DiffiT  FID              1.73    #4
Image Generation  ImageNet 512x512  DiffiT  FID              2.67    #10
Image Generation  ImageNet 512x512  DiffiT  Inception score  252.12  #5
