no code implementations • 19 Feb 2024 • Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
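The entropy term described here can be sketched as follows — a minimal NumPy illustration of a binary-entropy regularizer, not the paper's implementation; the function name and the `eps` clamp are assumptions:

```python
import numpy as np

def binary_entropy_loss(opacity, eps=1e-7):
    """Mean binary entropy H(o) = -o*log(o) - (1-o)*log(1-o).

    Minimizing this term pushes each opacity toward 0 or 1, so the
    learned density field binarizes and a surface can be extracted
    cleanly at the end of training. `eps` guards against log(0).
    """
    o = np.clip(opacity, eps, 1.0 - eps)
    return float(np.mean(-o * np.log(o) - (1.0 - o) * np.log(1.0 - o)))
```

The loss peaks at opacity 0.5 (log 2 ≈ 0.693 nats) and approaches zero as values saturate toward 0 or 1, which is the binarization pressure the sentence describes.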
no code implementations • 20 Dec 2023 • Fangjinhua Wang, Marie-Julie Rakotosaona, Michael Niemeyer, Richard Szeliski, Marc Pollefeys, Federico Tombari
In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
no code implementations • 12 Dec 2023 • Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lučić, Richard Szeliski, Jonathan T. Barron
Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates.
no code implementations • 28 Feb 2023 • Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.
no code implementations • 23 Feb 2023 • Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
no code implementations • CVPR 2023 • Hong-Xing Yu, Samir Agarwala, Charles Herrmann, Richard Szeliski, Noah Snavely, Jiajun Wu, Deqing Sun
Recovering lighting in a scene from a single image is a fundamental problem in computer vision.
no code implementations • CVPR 2021 • Aleksander Holynski, Brian Curless, Steven M. Seitz, Richard Szeliski
In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video.
no code implementations • 27 Aug 2020 • Aleksander Holynski, David Geraghty, Jan-Michael Frahm, Chris Sweeney, Richard Szeliski
Low-frequency long-range errors (drift) are an endemic problem in 3D structure from motion, and can often hamper otherwise accurate reconstructions of the scene.

3 code implementations • 30 Apr 2020 • Xuan Luo, Jia-Bin Huang, Richard Szeliski, Kevin Matzen, Johannes Kopf
We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video.
3 code implementations • CVPR 2020 • Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson
Single image view synthesis allows for the generation of new views of a scene given a single input image.
no code implementations • ICCV 2015 • Jan Stühmer, Sebastian Nowozin, Andrew Fitzgibbon, Richard Szeliski, Travis Perry, Sunil Acharya, Daniel Cremers, Jamie Shotton
In this paper, we show how model-based object tracking allows the object's depth to be reconstructed at an order of magnitude higher frame rate through simple modifications to an off-the-shelf depth camera.
no code implementations • CVPR 2014 • Sudipta N. Sinha, Daniel Scharstein, Richard Szeliski
We present a stereo algorithm designed for speed and efficiency that uses local slanted plane sweeps to propose disparity hypotheses for a semi-global matching algorithm.
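A slanted plane induces a linear disparity field over the image, which is what makes it a cheap source of hypotheses. The sketch below illustrates that geometry only — it is not the paper's algorithm; the plane parameterization d(x, y) = a·x + b·y + c and the function name are assumptions:

```python
import numpy as np

def slanted_plane_disparities(plane, xs, ys):
    """Disparity hypotheses induced by a slanted plane d(x, y) = a*x + b*y + c.

    A classical fronto-parallel plane sweep is the special case a = b = 0,
    where every pixel in the sweep shares one constant disparity. Slanted
    planes let a single hypothesis fit surfaces oblique to the camera.
    """
    a, b, c = plane
    return a * xs + b * ys + c
```

Each local plane fit yields one such disparity map over its support region; those per-pixel values then serve as candidate labels for the semi-global matching step.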
1 code implementation • International Journal of Computer Vision 2010 • Simon Baker, Daniel Scharstein, J. P. Lewis, Stefan Roth, Michael J. Black, Richard Szeliski
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance.