Search Results for author: Lam Huynh

Found 10 papers, 1 paper with code

Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis

no code implementations · 9 Aug 2022 · Phong Nguyen-Ha, Lam Huynh, Esa Rahtu, Jiri Matas, Janne Heikkila

Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model.

Neural Rendering, Novel View Synthesis

Lightweight Monocular Depth with a Novel Neural Architecture Search Method

no code implementations · 25 Aug 2021 · Lam Huynh, Phong Nguyen, Jiri Matas, Esa Rahtu, Janne Heikkila

This paper presents a novel neural architecture search method, called LiDNAS, for generating lightweight monocular depth estimation models.

Monocular Depth Estimation, Neural Architecture Search

Monocular Depth Estimation Primed by Salient Point Detection and Normalized Hessian Loss

no code implementations · 25 Aug 2021 · Lam Huynh, Matteo Pedone, Phong Nguyen, Jiri Matas, Esa Rahtu, Janne Heikkila

In addition, we introduce a normalized Hessian loss term invariant to scaling and shear along the depth direction, which is shown to substantially improve the accuracy.
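The invariance claimed above can be illustrated with a small sketch: second-order finite differences are unaffected by adding a linear plane to the depth (shear along the depth direction), and normalizing each Hessian component by its own magnitude cancels uniform depth scaling. This is a minimal illustration of the idea, not the paper's actual formulation, which may differ in detail.

```python
import numpy as np

def hessian(d):
    """Second-order finite differences of a depth map d (H x W)."""
    dxx = d[:, 2:] - 2 * d[:, 1:-1] + d[:, :-2]
    dyy = d[2:, :] - 2 * d[1:-1, :] + d[:-2, :]
    return dxx, dyy

def normalized_hessian_loss(pred, gt, eps=1e-6):
    """Illustrative loss: compare Hessian components normalized by
    their own mean magnitude, so a uniform depth scale cancels out.
    Linear (shear) terms already vanish under the second difference."""
    loss = 0.0
    for hp, hg in zip(hessian(pred), hessian(gt)):
        hp = hp / (np.abs(hp).mean() + eps)
        hg = hg / (np.abs(hg).mean() + eps)
        loss += np.abs(hp - hg).mean()
    return loss
```

With this construction, `normalized_hessian_loss(a * gt + b * x + c * y, gt)` is (numerically) zero for any scale `a > 0` and shear plane `b * x + c * y`, which is the invariance property the abstract describes.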

Monocular Depth Estimation

RGBD-Net: Predicting color and depth images for novel views synthesis

no code implementations · 29 Nov 2020 · Phong Nguyen, Animesh Karnewar, Lam Huynh, Esa Rahtu, Jiri Matas, Janne Heikkila

We propose a new cascaded architecture for novel view synthesis, called RGBD-Net, which consists of two core components: a hierarchical depth regression network and a depth-aware generator network.
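The two-component cascade described in the abstract can be sketched schematically as follows. The stand-in functions below are hypothetical placeholders showing only the data flow (depth first, then depth-conditioned RGB synthesis); the real RGBD-Net uses learned networks for both stages.

```python
import numpy as np

def depth_regression(ref_images, target_pose):
    """Stage 1 (placeholder): predict a depth map for the target view.
    ref_images: (N, H, W, 3); returns pseudo-depth of shape (H, W)."""
    return ref_images.mean(axis=(0, 3))

def depth_aware_generator(depth, ref_images):
    """Stage 2 (placeholder): synthesize the target RGB image
    conditioned on the predicted depth via a depth-weighted blend."""
    w = depth / (depth.max() + 1e-6)
    return (w[None, :, :, None] * ref_images).mean(axis=0)

def rgbd_net(ref_images, target_pose):
    """Cascade: depth regression feeds the depth-aware generator,
    so the model outputs both color and depth for the novel view."""
    depth = depth_regression(ref_images, target_pose)
    rgb = depth_aware_generator(depth, ref_images)
    return rgb, depth
```

The key design point the sketch captures is that depth is predicted explicitly and then consumed by the generator, which is why the network yields both a color and a depth image for each novel view.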

Novel View Synthesis, Regression

Sequential View Synthesis with Transformer

no code implementations · 9 Apr 2020 · Phong Nguyen-Ha, Lam Huynh, Esa Rahtu, Janne Heikkila

This paper addresses the problem of novel view synthesis by means of neural rendering, where we are interested in predicting the novel view at an arbitrary camera pose based on a given set of input images from other viewpoints.

Decoder, Neural Rendering +1

Guiding Monocular Depth Estimation Using Depth-Attention Volume

2 code implementations · ECCV 2020 · Lam Huynh, Phong Nguyen-Ha, Jiri Matas, Esa Rahtu, Janne Heikkila

Recovering the scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate different 3D interpretations.

Monocular Depth Estimation

Predicting Novel Views Using Generative Adversarial Query Network

no code implementations · 10 Apr 2019 · Phong Nguyen-Ha, Lam Huynh, Esa Rahtu, Janne Heikkila

Predicting a novel view of a scene from an arbitrary number of observations is challenging for computers as well as for humans.

Decoder, Novel View Synthesis
