Search Results for author: Ravi Ramamoorthi

Found 57 papers, 13 papers with code

RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion

no code implementations10 Apr 2024 Jaidev Shriram, Alex Trevithick, Lingjie Liu, Ravi Ramamoorthi

We introduce RealmDreamer, a technique for generating general forward-facing 3D scenes from text descriptions.

3D Inpainting Scene Generation

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

no code implementations27 Mar 2024 Mukund Varma T, Peihao Wang, Zhiwen Fan, Zhangyang Wang, Hao Su, Ravi Ramamoorthi

In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer or scene editing, enabled by large-scale 2D image datasets.

Colorization Image Colorization +3

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

no code implementations4 Jan 2024 Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries of scenes from collections of 2D images via neural volume rendering.

Neural Rendering Super-Resolution

OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects

no code implementations NeurIPS 2023 Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su

We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials, captured under 72 camera views and a large number of different illuminations.

Foreground Segmentation Inverse Rendering

A Theory of Topological Derivatives for Inverse Rendering of Geometry

no code implementations ICCV 2023 Ishit Mehta, Manmohan Chandraker, Ravi Ramamoorthi

We introduce a theoretical framework for differentiable surface evolution that allows discrete topology changes through the use of topological derivatives for variational optimization of image functionals.

3D Reconstruction Image Reconstruction +3

NeRFs: The Search for the Best 3D Representation

no code implementations5 Aug 2023 Ravi Ramamoorthi

Neural Radiance Fields or NeRFs have become the representation of choice for problems in view synthesis or image-based rendering, as well as in many other applications across computer graphics and vision, and beyond.

Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

no code implementations12 Jul 2023 Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, Ravi Ramamoorthi

In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view.

Tensor Decomposition

PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN

no code implementations29 Jun 2023 Kai-En Lin, Alex Trevithick, Keli Cheng, Michel Sarkis, Mohsen Ghafoorian, Ning Bi, Gerhard Reitmayr, Ravi Ramamoorthi

In this work, our goal is to take as input a monocular video of a face, and create an editable dynamic portrait able to handle extreme head poses.

Face Generation

Real-Time Radiance Fields for Single-Image Portrait View Synthesis

no code implementations3 May 2023 Alex Trevithick, Matthew Chan, Michael Stengel, Eric R. Chan, Chao Liu, Zhiding Yu, Sameh Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., a face portrait) in real time.

Data Augmentation Novel View Synthesis

NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion

no code implementations20 Feb 2023 Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi

Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.

Novel View Synthesis

Vision Transformer for NeRF-Based View Synthesis from a Single Input Image

1 code implementation12 Jul 2022 Kai-En Lin, Lin Yen-Chen, Wei-Sheng Lai, Tsung-Yi Lin, Yi-Chang Shih, Ravi Ramamoorthi

Existing approaches condition on local image features to reconstruct a 3D object, but often render blurry predictions at viewpoints that are far away from the source view.

Novel View Synthesis

Physically-Based Editing of Indoor Scene Lighting from a Single Image

no code implementations19 May 2022 Zhengqin Li, Jia Shi, Sai Bi, Rui Zhu, Kalyan Sunkavalli, Miloš Hašan, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.

Inverse Rendering Lighting Estimation +1

A Level Set Theory for Neural Implicit Evolution under Explicit Flows

no code implementations14 Apr 2022 Ishit Mehta, Manmohan Chandraker, Ravi Ramamoorthi

Our method uses the flow field to deform parametric implicit surfaces by extending the classical theory of level sets.

Inverse Rendering
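
For background, the classical level-set evolution that this abstract extends (stated here in its textbook form, not the paper's formulation for neural implicit surfaces): an implicit function $\varphi$ advected by an explicit flow field $V$ satisfies

$$\frac{\partial \varphi}{\partial t} + V \cdot \nabla \varphi = 0, \qquad \text{or, for motion with normal speed } v_n, \quad \frac{\partial \varphi}{\partial t} = -\, v_n \,\lVert \nabla \varphi \rVert .$$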

Learning Neural Transmittance for Efficient Rendering of Reflectance Fields

no code implementations25 Oct 2021 Mohammad Shafiei, Sai Bi, Zhengqin Li, Aidas Liaudanskas, Rodrigo Ortiz-Cayon, Ravi Ramamoorthi

However, it remains challenging and time-consuming to render such representations under complex lighting such as environment maps, which requires ray marching towards each individual light to calculate the transmittance at every sampled point.
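
The expensive baseline the abstract refers to can be sketched directly: march a ray from each shading point toward each light and accumulate transmittance. This is a minimal sketch with a stand-in density field and illustrative step counts; the paper's contribution, a learned transmittance function that avoids this per-light loop, is not shown.

```python
# Baseline per-light transmittance by ray marching: T = exp(-sum(sigma * delta)).
# The density field below is a hypothetical stand-in, not the paper's model.
import numpy as np

def transmittance(point, light_dir, density_fn, n_steps=64, t_max=4.0):
    ts = np.linspace(0.0, t_max, n_steps)
    delta = ts[1] - ts[0]
    samples = point[None, :] + ts[:, None] * light_dir[None, :]
    sigmas = density_fn(samples)                      # (n_steps,) densities along the ray
    return np.exp(-np.sum(sigmas * delta))

density_fn = lambda p: np.exp(-np.linalg.norm(p, axis=-1))   # hypothetical density field
point = np.array([0.2, 0.0, 0.0])
rng = np.random.default_rng(0)
light_dirs = rng.standard_normal((128, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
# One march per environment-map light: this loop is what becomes expensive.
T = np.array([transmittance(point, d, density_fn) for d in light_dirs])
print(T.shape)
```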

View Synthesis of Dynamic Scenes based on Deep 3D Mask Volume

no code implementations ICCV 2021 Kai-En Lin, Guowei Yang, Lei Xiao, Feng Liu, Ravi Ramamoorthi

Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations.

NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting

no code implementations26 Jul 2021 Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, Ravi Ramamoorthi

Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions.

OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets

no code implementations CVPR 2021 Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Milos Hasan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines to create virtual robotics environments with unique ground truth, such as friction coefficients and correspondence to real scenes.

Friction Inverse Rendering +1

Modulated Periodic Activations for Generalizable Local Functional Representations

2 code implementations ICCV 2021 Ishit Mehta, Michaël Gharbi, Connelly Barnes, Eli Shechtman, Ravi Ramamoorthi, Manmohan Chandraker

Our approach produces generalizable functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.
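
A minimal sketch of one layer in this spirit: a sine layer whose activations are scaled elementwise by a modulation signal computed from a per-signal latent code. The layer sizes, frequency scale, and ReLU modulator are illustrative assumptions, not the paper's exact architecture.

```python
# One modulated sine layer: h(x) = alpha(z) * sin(omega0 * (W x + b)),
# where alpha is produced from a latent code z. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim, hidden, latent_dim, omega0 = 2, 64, 32, 30.0

W_syn = rng.uniform(-1, 1, (hidden, dim)) / dim          # synthesis weights
b_syn = np.zeros(hidden)
W_mod = rng.standard_normal((hidden, latent_dim)) * 0.1  # modulator weights

def layer(x, z):
    """x: (N, dim) input coordinates; z: (latent_dim,) latent code for one signal."""
    alpha = np.maximum(W_mod @ z, 0.0)                    # modulation signal (ReLU on latent)
    return alpha * np.sin(omega0 * (x @ W_syn.T + b_syn))

x = rng.uniform(-1, 1, (4, dim))
z = rng.standard_normal(latent_dim)
print(layer(x, z).shape)                                  # (4, 64)
```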

NeuMIP: Multi-Resolution Neural Materials

no code implementations6 Apr 2021 Alexandr Kuznetsov, Krishna Mullia, Zexiang Xu, Miloš Hašan, Ravi Ramamoorthi

We also introduce neural offsets, a novel method which allows rendering materials with intricate parallax effects without any tessellation.

Photon-Driven Neural Path Guiding

no code implementations5 Oct 2020 Shilin Zhu, Zexiang Xu, Tiancheng Sun, Alexandr Kuznetsov, Mark Meyer, Henrik Wann Jensen, Hao Su, Ravi Ramamoorthi

To fully make use of our deep neural network, we partition the scene space into an adaptive hierarchical grid, in which we apply our network to reconstruct high-quality sampling distributions for any local region in the scene.

Real-Time Selfie Video Stabilization

1 code implementation CVPR 2021 Jiyang Yu, Ravi Ramamoorthi, Keli Cheng, Michel Sarkis, Ning Bi

Our method is fully automatic and produces visually and quantitatively better results than previous real-time general video stabilization methods.

Video Stabilization

Neural Light Transport for Relighting and View Synthesis

1 code implementation9 Aug 2020 Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman

In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint.

Neural Reflectance Fields for Appearance Acquisition

no code implementations9 Aug 2020 Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi

We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.

OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets

no code implementations25 Jul 2020 Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines to create virtual robotics environments with unique ground truth, such as friction coefficients and correspondence to real scenes.

Friction Inverse Rendering +2

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

13 code implementations NeurIPS 2020 Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains.
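
The mapping named in this abstract can be written in a few lines. Below is a minimal NumPy sketch with an assumed Gaussian frequency matrix B and an illustrative scale of 10; the paper studies several scales and distributions that are not reproduced here.

```python
# Random Fourier feature mapping: gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)].
import numpy as np

def fourier_features(v, B):
    """Map low-dimensional points v (N, d) to (N, 2m) features using frequencies B (m, d)."""
    proj = 2.0 * np.pi * v @ B.T                       # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = 10.0 * rng.standard_normal((256, 2))               # hypothetical scale of 10 for 2D inputs
coords = rng.uniform(size=(4, 2))                      # e.g. normalized pixel coordinates
features = fourier_features(coords, B)                 # feed these to an MLP instead of raw coords
print(features.shape)                                  # (4, 512)
```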

Deep Photon Mapping

no code implementations25 Apr 2020 Shilin Zhu, Zexiang Xu, Henrik Wann Jensen, Hao Su, Ravi Ramamoorthi

This network is easy to incorporate in many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods.

Denoising Density Estimation
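
For context, a minimal sketch of the classical kernel density estimator that the abstract says the network can be swapped in for: gather photons within a radius of the shading point and divide their summed flux by the disk area. The photon data here are synthetic stand-ins, and the simple constant kernel is one common choice rather than the paper's setup.

```python
# Classical photon-mapping density estimate with a constant kernel over a disk.
import numpy as np

def density_estimate(x, photon_pos, photon_flux, radius):
    """x: (3,) shading point; photon_pos: (P, 3); photon_flux: (P, 3) RGB flux."""
    d2 = np.sum((photon_pos - x) ** 2, axis=1)
    mask = d2 < radius ** 2
    return photon_flux[mask].sum(axis=0) / (np.pi * radius ** 2)

rng = np.random.default_rng(0)
photon_pos = rng.uniform(-1, 1, (10_000, 3))           # synthetic photon positions
photon_flux = rng.uniform(0, 1e-3, (10_000, 3))        # synthetic RGB flux per photon
print(density_estimate(np.zeros(3), photon_pos, photon_flux, radius=0.1))
```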

Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images

no code implementations CVPR 2020 Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, Ravi Ramamoorthi

We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object from a sparse set of only six images captured by wide-baseline cameras under collocated point lighting.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

36 code implementations ECCV 2020 Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.

Generalizable Novel View Synthesis Low-Dose X-Ray CT Reconstruction +2
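
The rendering step implied by this abstract is numerical quadrature of the volume rendering integral along each camera ray. Below is a minimal NumPy sketch of that compositing step, with random stand-ins for the MLP's density and radiance outputs; it is not the full NeRF pipeline (no positional encoding, hierarchical sampling, or training loop).

```python
# Volume rendering quadrature along one ray:
# C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, with T_i = prod_{j<i} (1 - alpha_j).
import numpy as np

def composite_ray(sigmas, rgbs, ts):
    """sigmas: (S,) densities, rgbs: (S, 3) radiance, ts: (S,) sample depths."""
    deltas = np.diff(ts, append=ts[-1] + 1e10)       # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))  # transmittance
    weights = alphas * trans
    return (weights[:, None] * rgbs).sum(axis=0)     # expected color along the ray

rng = np.random.default_rng(0)
ts = np.linspace(2.0, 6.0, 64)                       # stratified sampling would be used in practice
sigmas = rng.random(64) * 5.0                        # stand-ins for the MLP's density outputs
rgbs = rng.random((64, 3))                           # stand-ins for view-dependent radiance
print(composite_ray(sigmas, rgbs, ts))
```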

Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness

1 code implementation CVPR 2020 Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, Hao Su

In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions.

3D Reconstruction Point Clouds

Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image

1 code implementation CVPR 2020 Zhengqin Li, Mohammad Shafiei, Ravi Ramamoorthi, Kalyan Sunkavalli, Manmohan Chandraker

Our inverse rendering network incorporates physical insights -- including a spatially-varying spherical Gaussian lighting representation, a differentiable rendering layer to model scene appearance, a cascade structure to iteratively refine the predictions and a bilateral solver for refinement -- allowing us to jointly reason about shape, lighting, and reflectance.

Inverse Rendering
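
For reference, a minimal sketch of a single spherical Gaussian lobe, the lighting primitive the abstract names. The parameterization below (axis, sharpness, amplitude) is the common convention; the paper's spatially-varying, per-pixel lighting prediction is not reproduced here.

```python
# Spherical Gaussian lobe: G(v) = mu * exp(lam * (dot(v, xi) - 1)) for unit directions v.
import numpy as np

def sg_eval(v, xi, lam, mu):
    """v: (3,) query direction; xi: (3,) lobe axis; lam: sharpness; mu: RGB amplitude."""
    return mu * np.exp(lam * (v @ xi - 1.0))

v = np.array([0.0, 0.0, 1.0])                         # query direction (unit vector)
xi = np.array([0.0, 0.0, 1.0])                        # lobe axis
print(sg_eval(v, xi, lam=20.0, mu=np.array([1.0, 0.9, 0.8])))
```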

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

1 code implementation2 May 2019 Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar

We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration.

Novel View Synthesis

Pushing the Boundaries of View Extrapolation with Multiplane Images

1 code implementation CVPR 2019 Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

We present a theoretical analysis showing how the range of views that can be rendered from an MPI increases linearly with the MPI disparity sampling frequency, as well as a novel MPI prediction procedure that theoretically enables view extrapolations of up to $4\times$ the lateral viewpoint movement allowed by prior work.

Fast and Full-Resolution Light Field Deblurring using a Deep Neural Network

no code implementations31 Mar 2019 Jonathan Samuel Lumentut, Tae Hyun Kim, Ravi Ramamoorthi, In Kyu Park

Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing.

16k Deblurring

Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition

no code implementations30 Jul 2018 Sai Bi, Nima Khademi Kalantari, Ravi Ramamoorthi

Experimental results show that our approach produces better results than the state-of-the-art DL and non-DL methods on various synthetic and real datasets both visually and numerically.

Intrinsic Image Decomposition

Image to Image Translation for Domain Adaptation

no code implementations CVPR 2018 Zak Murez, Soheil Kolouri, David Kriegman, Ravi Ramamoorthi, Kyungnam Kim

This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network.

Image-to-Image Translation Translation +1

Depth and Image Restoration From Light Field in a Scattering Medium

no code implementations ICCV 2017 Jiandong Tian, Zachary Murez, Tong Cui, Zhen Zhang, David Kriegman, Ravi Ramamoorthi

First, we present a new single image restoration algorithm which removes backscatter and attenuation from images better than existing methods, and apply it to each view in the light field.

Depth Estimation Image Restoration

Learning to Synthesize a 4D RGBD Light Field from a Single Image

1 code implementation ICCV 2017 Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng

We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction).

Depth Estimation

Robust Energy Minimization for BRDF-Invariant Shape From Light Fields

no code implementations CVPR 2017 Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

On the other hand, recent works have explored PDE invariants for shape recovery with complex BRDFs, but they have not been incorporated into robust numerical optimization frameworks.

Light Field Video Capture Using a Learning-Based Hybrid Imaging System

1 code implementation8 May 2017 Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, Ravi Ramamoorthi

Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps.

Light Field Blind Motion Deblurring

no code implementations CVPR 2017 Pratul P. Srinivasan, Ren Ng, Ravi Ramamoorthi

We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions.

Deblurring

Learning-Based View Synthesis for Light Field Cameras

no code implementations9 Sep 2016 Nima Khademi Kalantari, Ting-Chun Wang, Ravi Ramamoorthi

Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views.

A 4D Light-Field Dataset and CNN Architectures for Material Recognition

no code implementations24 Aug 2016 Ting-Chun Wang, Jun-Yan Zhu, Ebi Hiroaki, Manmohan Chandraker, Alexei A. Efros, Ravi Ramamoorthi

We introduce a new light-field dataset of materials, and take advantage of the recent success of deep learning to perform material recognition on the 4D light-field.

Image Classification Image Segmentation +4

Depth From Semi-Calibrated Stereo and Defocus

no code implementations CVPR 2016 Ting-Chun Wang, Manohar Srikanth, Ravi Ramamoorthi

In this work, we propose a multi-camera system where we combine a main high-quality camera with two low-res auxiliary cameras.

Stereo Matching Stereo Matching Hand

Oriented Light-Field Windows for Scene Flow

no code implementations ICCV 2015 Pratul P. Srinivasan, Michael W. Tao, Ren Ng, Ravi Ramamoorthi

2D spatial image windows are used for comparing pixel values in computer vision applications such as correspondence for optical flow and 3D reconstruction, bilateral filtering, and image segmentation.

3D Reconstruction Image Segmentation +3

Occlusion-Aware Depth Estimation Using Light-Field Cameras

no code implementations ICCV 2015 Ting-Chun Wang, Alexei A. Efros, Ravi Ramamoorthi

In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications.

Depth Estimation

What Object Motion Reveals about Shape with Unknown BRDF and Lighting

no code implementations CVPR 2013 Manmohan Chandraker, Dikpal Reddy, Yizhou Wang, Ravi Ramamoorthi

Under orthographic projection, we prove that three differential motions suffice to yield an invariant that relates shape to image derivatives, regardless of BRDF and illumination.

Surface Reconstruction
