Search Results for author: Juan Luis Gonzalez Bello

Found 9 papers, 1 paper with code

From-Ground-To-Objects: Coarse-to-Fine Self-supervised Monocular Depth Estimation of Dynamic Objects with Ground Contact Prior

no code implementations15 Dec 2023 Jaeho Moon, Juan Luis Gonzalez Bello, Byeongjun Kwon, Munchurl Kim

Subsequently, in the fine training stage, we refine the depth estimation (DE) network to learn the detailed depth of the objects from the reprojection loss, while ensuring accurate DE in moving-object regions by employing our regularization loss with a cost-volume-based weighting factor.

Monocular Depth Estimation
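
As background for the reprojection loss named in the excerpt above, here is a minimal sketch of the standard SSIM + L1 photometric error common in self-supervised depth estimation; the function names are illustrative, and the paper's cost-volume-based weighting would enter as a per-pixel multiplier on this map.

```python
import torch
import torch.nn.functional as F

def ssim_distance(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Per-pixel (1 - SSIM) / 2 over a 3x3 average-pooled window.
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2))
    return ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

def reprojection_error(target, reprojected, alpha=0.85):
    # Weighted SSIM + L1 photometric error; a per-pixel weighting factor
    # (e.g. one derived from a cost volume) would multiply this map
    # before averaging into a scalar loss.
    l1 = (target - reprojected).abs().mean(1, keepdim=True)
    return alpha * ssim_distance(target, reprojected) + (1 - alpha) * l1
```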

ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields

no code implementations13 Dec 2023 Juan Luis Gonzalez Bello, Minh-Quan Viet Bui, Munchurl Kim

Recent advances in neural rendering have shown that compact implicit models, albeit slow, can learn a scene's geometry and view-dependent appearance from multiple views.

Neural Rendering
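
For context on what such implicit models compute at render time (ProNeRF's contribution concerns how rays and samples are chosen, which is not sketched here), below is the classic NeRF volume-rendering quadrature; tensor shapes and names are illustrative.

```python
import torch

def composite_rays(sigmas, colors, deltas):
    # Classic NeRF volume-rendering quadrature.
    # sigmas: (R, S) densities, colors: (R, S, 3), deltas: (R, S) sample spacings.
    alphas = 1.0 - torch.exp(-sigmas * deltas)               # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)      # transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                                 # compositing weights
    return (weights.unsqueeze(-1) * colors).sum(dim=-2)      # (R, 3) pixel colors
```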

Novel View Synthesis with View-Dependent Effects from a Single Image

no code implementations13 Dec 2023 Juan Luis Gonzalez Bello, Munchurl Kim

In this paper, we are the first to incorporate view-dependent effects into single-image-based novel view synthesis (NVS).

Novel View Synthesis · Self-Supervised Learning

Positional Information is All You Need: A Novel Pipeline for Self-Supervised SVDE from Videos

no code implementations18 May 2022 Juan Luis Gonzalez Bello, Jaeho Moon, Munchurl Kim

Recently, much attention has been drawn to learning the underlying 3D structures of a scene from monocular videos in a fully self-supervised fashion.

Depth Estimation · Quantization

PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss

1 code implementation CVPR 2021 Juan Luis Gonzalez Bello, Munchurl Kim

Our PLADE-Net is based on a new network architecture with neural positional encoding and a novel loss function that borrows from the closed-form solution of the matting Laplacian to learn pixel-level accurate depth estimation from stereo images.

Depth Estimation · Image Matting
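
As one plausible, simplified reading of the positional-encoding ingredient named above (not necessarily the authors' exact design), the sketch below builds normalized per-pixel coordinate channels that can be fed to the network alongside image features.

```python
import torch

def positional_channels(b, h, w, device="cpu"):
    # Normalized pixel-coordinate channels in [-1, 1], CoordConv-style,
    # to be concatenated with image features so the network can reason
    # about absolute pixel position.
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([gx, gy], dim=0)             # (2, H, W)
    return coords.unsqueeze(0).expand(b, -1, -1, -1)  # (B, 2, H, W)
```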

Deep 3D Pan via Local adaptive "t-shaped" convolutions with global and local adaptive dilations

no code implementations ICLR 2020 Juan Luis Gonzalez Bello, Munchurl Kim

Our proposed network architecture, the monster-net, is devised with a novel "t-shaped" adaptive kernel with globally and locally adaptive dilations, which efficiently incorporates the global camera shift and handles the local 3D geometries of the target image's pixels to synthesize natural-looking 3D panned views from a single 2D input image.

Monocular Depth Estimation · SSIM · +1
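
To make the "t-shaped" kernel idea concrete, here is a heavily simplified fixed-dilation sketch; the actual monster-net predicts its dilations adaptively per pixel, and borders are handled here with wrap-around purely for brevity. All names and shapes are illustrative.

```python
import torch

def t_shaped_filter(img, weights, k_h=3, k_v=1, dilation=1):
    # Each output pixel is a per-pixel weighted sum over a wide horizontal
    # row of taps (matching the horizontal parallax of a camera pan) plus
    # a short vertical column sharing the same center.
    # img: (B, C, H, W); weights: (B, T, H, W) with T = (2*k_h + 1) + 2*k_v.
    taps = []
    for dx in range(-k_h, k_h + 1):                      # horizontal arm
        taps.append(torch.roll(img, shifts=dx * dilation, dims=3))
    for dy in range(-k_v, k_v + 1):                      # vertical stem
        if dy != 0:                                      # center tap already taken
            taps.append(torch.roll(img, shifts=dy * dilation, dims=2))
    taps = torch.stack(taps, dim=2)                      # (B, C, T, H, W)
    w = torch.softmax(weights, dim=1).unsqueeze(1)       # normalize over taps
    return (taps * w).sum(dim=2)                         # (B, C, H, W)
```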

Deep 3D Pan via adaptive "t-shaped" convolutions with global and local adaptive dilations

no code implementations2 Oct 2019 Juan Luis Gonzalez Bello, Munchurl Kim

Our proposed network architecture, the monster-net, is devised with a novel "t-shaped" adaptive kernel with globally and locally adaptive dilations, which efficiently incorporates the global camera shift and handles the local 3D geometries of the target image's pixels to synthesize natural-looking 3D panned views from a single 2D input image.

Monocular Depth Estimation · SSIM · +1

Deep 3D-Zoom Net: Unsupervised Learning of Photo-Realistic 3D-Zoom

no code implementations20 Sep 2019 Juan Luis Gonzalez Bello, Munchurl Kim

The 3D-zoom operation is a positive translation of the camera along the Z-axis, perpendicular to the image plane.

Disparity Estimation · Novel View Synthesis · +1
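
That definition has a direct pinhole-geometry consequence: under a forward translation t_z, a pixel observing a point at depth Z moves radially away from the principal point by the factor Z / (Z - t_z). A small illustrative helper (names assumed):

```python
import numpy as np

def zoomed_pixel(p, c, depth, t_z):
    # Pinhole geometry of a pure Z-translation: a point at depth Z seen at
    # pixel p moves radially away from the principal point c by the factor
    # Z / (Z - t_z), so nearby objects "zoom" faster than distant ones.
    # Assumes t_z < depth (the camera does not pass the point).
    p, c = np.asarray(p, dtype=float), np.asarray(c, dtype=float)
    return c + (p - c) * depth / (depth - t_z)

# e.g. zoomed_pixel([800, 600], c=[640, 480], depth=10.0, t_z=2.0) -> [840. 630.]
```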

A Novel Monocular Disparity Estimation Network with Domain Transformation and Ambiguity Learning

no code implementations20 Mar 2019 Juan Luis Gonzalez Bello, Munchurl Kim

Convolutional neural networks (CNNs) have shown state-of-the-art results for low-level computer vision problems such as stereo and monocular disparity estimation, but still have much room to improve in terms of accuracy, number of parameters, etc.

Decoder · Disparity Estimation · +2
