DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models

Scene extrapolation -- the idea of generating novel views by flying into a given image -- is a promising yet challenging task. For each predicted frame, a joint inpainting and 3D refinement problem has to be solved, which is ill-posed and highly ambiguous. Moreover, training data for long-range scenes is difficult to obtain and usually lacks sufficient views to infer accurate camera poses. We introduce DiffDreamer, an unsupervised framework capable of synthesizing novel views depicting a long camera trajectory while training solely on internet-collected images of nature scenes. Utilizing the stochastic nature of guided denoising steps, we train the diffusion model to refine projected RGBD images and, at inference, condition the denoising steps on multiple past and future frames. We demonstrate that image-conditioned diffusion models can effectively perform long-range scene extrapolation while preserving consistency significantly better than prior GAN-based methods. DiffDreamer is a powerful and efficient solution for scene extrapolation, producing impressive results despite limited supervision. Project page: https://primecai.github.io/diffdreamer.
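The loop the abstract describes, warping the current RGBD frame into the next camera pose and then inpainting and refining it with stochastic guided denoising conditioned on multiple neighboring frames, can be sketched in a few lines. The code below is a minimal illustration under stated assumptions, not the authors' implementation: `Denoiser`, `warp_rgbd`, the two-frame conditioning, and the simplified DDPM-style refinement loop are all hypothetical stand-ins.

```python
# Minimal sketch of DiffDreamer-style scene extrapolation.
# All names and the noise schedule here are illustrative assumptions.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Stand-in for the image-conditioned diffusion network. Takes the noisy
    RGBD frame, the timestep, and conditioning frames concatenated on channels."""
    def __init__(self, channels: int = 4):
        super().__init__()
        # Input: current noisy frame + two conditioning frames (3 * channels).
        self.net = nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1)

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond], dim=1))

def warp_rgbd(rgbd, pose):
    """Placeholder for forward-warping an RGBD frame into a new camera pose.
    A real implementation would unproject pixels with the depth channel and
    reproject them with the camera intrinsics/extrinsics."""
    return rgbd  # identity warp for illustration only

@torch.no_grad()
def extrapolate(rgbd, poses, denoiser, num_denoise_steps=50):
    """Fly into the scene: warp, then refine each frame with guided denoising
    conditioned on past and future anchor frames."""
    frames = [rgbd]
    for pose in poses:
        # 1. Project the previous RGBD frame into the next camera pose.
        #    Disocclusions leave holes the diffusion model must inpaint.
        warped = warp_rgbd(frames[-1], pose)

        # 2. Condition on multiple frames: here, the nearest past frame and
        #    the first frame warped to the same pose (two-frame assumption).
        cond = torch.cat([frames[-1], warp_rgbd(frames[0], pose)], dim=1)

        # 3. Stochastic guided denoising: start from the warped frame plus
        #    noise and iteratively refine (a simplified DDPM-style loop).
        x_t = warped + torch.randn_like(warped)
        for t in reversed(range(num_denoise_steps)):
            noise_scale = t / num_denoise_steps
            x_t = denoiser(x_t, t, cond) + noise_scale * torch.randn_like(x_t)

        frames.append(x_t)
    return frames

# Usage: one 4-channel (RGB + depth) frame, a short camera trajectory.
rgbd = torch.randn(1, 4, 64, 64)
frames = extrapolate(rgbd, poses=[None] * 5, denoiser=Denoiser())
print(len(frames))  # 6 frames: the input plus 5 extrapolated views
```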


Datasets

LHQ

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Perpetual View Generation | LHQ | InfNat-Zero | FID (first 20 steps) | 39.45 | #2 |
| Perpetual View Generation | LHQ | InfNat-Zero | IS (first 20 steps) | 2.8 | #2 |
| Perpetual View Generation | LHQ | InfNat-Zero | KID (first 20 steps) | 0.12 | #2 |
| Perpetual View Generation | LHQ | InfNat-Zero | FID (full 100 steps) | 26.24 | #1 |
| Perpetual View Generation | LHQ | InfNat-Zero | IS (full 100 steps) | 2.72 | #2 |
| Perpetual View Generation | LHQ | InfNat-Zero | KID (full 100 steps) | 0.12 | #1 |
| Perpetual View Generation | LHQ | DiffDreamer | FID (first 20 steps) | 34.49 | #1 |
| Perpetual View Generation | LHQ | DiffDreamer | IS (first 20 steps) | 2.82 | #1 |
| Perpetual View Generation | LHQ | DiffDreamer | KID (first 20 steps) | 0.08 | #1 |
| Perpetual View Generation | LHQ | DiffDreamer | FID (full 100 steps) | 51 | #2 |
| Perpetual View Generation | LHQ | DiffDreamer | IS (full 100 steps) | 2.99 | #1 |
| Perpetual View Generation | LHQ | DiffDreamer | KID (full 100 steps) | 0.28 | #2 |
