Search Results for author: Richard Szeliski

Found 13 papers, 3 papers with code

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis

no code implementations19 Feb 2024 Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger

Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training.
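The binary entropy term described above can be sketched as follows. This is a hypothetical standalone illustration, not the paper's implementation; the loss weighting and training schedule are not reproduced here.

```python
import numpy as np

def binary_entropy_loss(opacity, eps=1e-7):
    """Mean binary entropy of opacity values in [0, 1].

    Minimizing this term pushes each opacity toward 0 or 1,
    encouraging opacities to binarize toward the end of training,
    which facilitates surface extraction.
    """
    o = np.clip(opacity, eps, 1.0 - eps)
    return float(np.mean(-o * np.log(o) - (1.0 - o) * np.log(1.0 - o)))

# Ambiguous opacities incur the maximal penalty; near-binary ones almost none.
soft = binary_entropy_loss(np.array([0.5, 0.5]))
hard = binary_entropy_loss(np.array([0.01, 0.99]))
assert hard < soft
```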

SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration

no code implementations12 Dec 2023 Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lučić, Richard Szeliski, Jonathan T. Barron

Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates.

Novel View Synthesis

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis

no code implementations28 Feb 2023 Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall

We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis.

Novel View Synthesis

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes

no code implementations23 Feb 2023 Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman

We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.

Animating Pictures with Eulerian Motion Fields

no code implementations CVPR 2021 Aleksander Holynski, Brian Curless, Steven M. Seitz, Richard Szeliski

In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video.

Image-to-Image Translation · Translation

Reducing Drift in Structure From Motion Using Extended Features

no code implementations27 Aug 2020 Aleksander Holynski, David Geraghty, Jan-Michael Frahm, Chris Sweeney, Richard Szeliski

Low-frequency long-range errors (drift) are an endemic problem in 3D structure from motion, and can often hamper reasonable reconstructions of the scene.

Consistent Video Depth Estimation

3 code implementations30 Apr 2020 Xuan Luo, Jia-Bin Huang, Richard Szeliski, Kevin Matzen, Johannes Kopf

We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video.

Depth Estimation · Monocular Reconstruction

SynSin: End-to-end View Synthesis from a Single Image

3 code implementations CVPR 2020 Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson

Single image view synthesis allows for the generation of new views of a scene given a single input image.

Novel View Synthesis

Model-Based Tracking at 300Hz Using Raw Time-of-Flight Observations

no code implementations ICCV 2015 Jan Stühmer, Sebastian Nowozin, Andrew Fitzgibbon, Richard Szeliski, Travis Perry, Sunil Acharya, Daniel Cremers, Jamie Shotton

In this paper, we show how to perform model-based object tracking that reconstructs the object's depth at an order-of-magnitude higher frame rate through simple modifications to an off-the-shelf depth camera.

Object Tracking

Efficient High-Resolution Stereo Matching using Local Plane Sweeps

no code implementations CVPR 2014 Sudipta N. Sinha, Daniel Scharstein, Richard Szeliski

We present a stereo algorithm designed for speed and efficiency that uses local slanted plane sweeps to propose disparity hypotheses for a semi-global matching algorithm.
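The slanted-plane hypotheses mentioned above can be sketched as follows: each candidate plane induces a per-pixel disparity proposal, among which a semi-global matcher then selects. The plane parameters below are illustrative only, not values from the paper.

```python
import numpy as np

def plane_disparity(a, b, c, xs, ys):
    """Disparity hypothesis induced by the slanted plane d = a*x + b*y + c.

    In a local plane sweep, sweeping over candidate (a, b, c) planes
    generates the disparity hypotheses fed to semi-global matching.
    """
    return a * xs + b * ys + c

# Evaluate one hypothetical plane over a tiny 4x3 pixel grid.
xs, ys = np.meshgrid(np.arange(4), np.arange(3))
d = plane_disparity(0.1, -0.05, 12.0, xs, ys)
```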

Clustering · Stereo Matching +2
