K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d choose 2 planes to represent a d-dimensional scene, providing a seamless way to go from static (d=3) to dynamic (d=4) scenes. This planar factorization makes it easy to add dimension-specific priors, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of the static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields performance similar to a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed- and varying-appearance scenes, k-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving 1000x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see https://sarafridov.github.io/K-Planes.
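
The abstract does not spell out how the plane features are combined or decoded, so the following is a minimal sketch (not the authors' code) of the planar factorization for a dynamic (d=4) scene. It assumes features are gathered from each of the six axis-aligned planes by bilinear interpolation, combined by elementwise multiplication, and decoded with a single linear layer; the multi-resolution grids and learned color basis of the full model are omitted, and all names, sizes, and resolutions are illustrative.

```python
# Illustrative sketch of a k-planes-style lookup for a d=4 (x, y, z, t) scene.
# Assumptions: elementwise combination of plane features and a plain linear
# decoder; resolutions and feature sizes are arbitrary choices.
import itertools
import torch
import torch.nn.functional as F

d, feat_dim, res = 4, 32, 64
axis_pairs = list(itertools.combinations(range(d), 2))  # 6 planes for d=4

# One learnable 2D feature grid per pair of axes, shape (1, C, res, res).
planes = torch.nn.ParameterList(
    [torch.nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res))
     for _ in axis_pairs]
)
decoder = torch.nn.Linear(feat_dim, 4)  # stand-in for the learned color basis / MLP

def query(points):
    """points: (N, 4) coordinates (x, y, z, t), normalized to [-1, 1]."""
    feats = torch.ones(points.shape[0], feat_dim)
    for plane, (i, j) in zip(planes, axis_pairs):
        # Project each point onto this plane and bilinearly interpolate its features.
        coords = points[:, [i, j]].view(1, -1, 1, 2)                # (1, N, 1, 2)
        sampled = F.grid_sample(plane, coords, align_corners=True)  # (1, C, N, 1)
        feats = feats * sampled.view(feat_dim, -1).t()              # elementwise combine
    return decoder(feats)                                           # (N, 4)

out = query(torch.rand(8, 4) * 2 - 1)
```

In a layout like this, storage grows with the area of the six planes rather than the volume of a full 4D grid, which is where the quoted compression over a dense grid comes from.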


Results from the Paper


Task: Novel View Synthesis

Dataset  Model                PSNR (Global Rank)  SSIM (Global Rank)
LLFF     Plenoxels            26.29 (#8)          --
LLFF     K-Planes (explicit)  26.78 (#5)          0.841 (#4)
LLFF     K-Planes (hybrid)    26.92 (#2)          0.847 (#3)
LLFF     TensoRF              26.73 (#6)          0.839 (#5)
NeRF     I-NGP                33.18 (#1)          --
NeRF     Plenoxels            31.71 (#5)          0.958 (#4)
NeRF     K-Planes (explicit)  32.21 (#4)          0.964 (#2)
NeRF     K-Planes (hybrid)    32.36 (#3)          0.967 (#1)
NeRF     TensoRF              33.14 (#2)          0.963 (#3)
