Search Results for author: Gyeongmin Choe

Found 8 papers, 2 papers with code

FSID: Fully Synthetic Image Denoising via Procedural Scene Generation

1 code implementation 7 Dec 2022 Gyeongmin Choe, Beibei Du, Seonghyeon Nam, Xiaoyu Xiang, Bo Zhu, Rakesh Ranjan

To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks.

Image Denoising Scene Generation +1
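
For context, training a denoiser on fully synthetic data typically means pairing clean synthetic renders with sensor-noise-corrupted copies. Below is a minimal, generic sketch of producing (noisy, clean) pairs with a Poisson-Gaussian noise model; this is an illustrative assumption, not FSID's actual procedural pipeline, and the parameter values are placeholders.

import numpy as np

def add_sensor_noise(clean, shot_scale=0.01, read_sigma=0.005, rng=None):
    # Corrupt a clean image in [0, 1] with signal-dependent Poisson shot noise
    # plus signal-independent Gaussian read noise (placeholder parameters).
    rng = rng or np.random.default_rng()
    shot = rng.poisson(clean / shot_scale) * shot_scale
    read = rng.normal(0.0, read_sigma, size=clean.shape)
    return np.clip(shot + read, 0.0, 1.0)

# Usage: any clean synthetic render yields a (noisy, clean) supervision pair.
clean = np.random.default_rng(0).random((64, 64, 3))
noisy = add_sensor_noise(clean)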

Consistent Direct Time-of-Flight Video Depth Super-Resolution

1 code implementation CVPR 2023 Zhanghao Sun, Wei Ye, Jinhui Xiong, Gyeongmin Choe, Jialiang Wang, Shuochen Su, Rakesh Ranjan

We believe the methods and dataset are beneficial to a broad community as dToF depth sensing is becoming mainstream on mobile devices.

Super-Resolution

Refining Geometry from Depth Sensors using IR Shading Images

no code implementations 18 Aug 2016 Gyeongmin Choe, Jaesik Park, Yu-Wing Tai, In So Kweon

To resolve the ambiguity between normals and distances in our model, we utilize an initial 3D mesh from KinectFusion and multi-view information to reliably estimate surface details that were not captured or reconstructed by KinectFusion.
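
As an aside, the shading cue this line of work exploits is the standard Lambertian relation between surface normals and observed IR intensity. The snippet below is a minimal sketch of that relation under a single distant light; the function names and the single-light assumption are illustrative, not the paper's actual near-light IR model.

import numpy as np

def lambertian_shading(normals, light_dir, albedo=1.0):
    # normals: (H, W, 3) unit surface normals; light_dir: (3,) unit vector.
    ndotl = np.einsum('hwc,c->hw', normals, light_dir)
    return albedo * np.clip(ndotl, 0.0, None)

def shading_residual(observed_ir, normals, light_dir, albedo=1.0):
    # Per-pixel gap between observed IR intensity and the Lambertian prediction,
    # the kind of cue a refinement step would minimize to sharpen coarse geometry.
    return observed_ir - lambertian_shading(normals, light_dir, albedo)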

Simultaneous Estimation of Near IR BRDF and Fine-Scale Surface Geometry

no code implementations CVPR 2016 Gyeongmin Choe, Srinivasa G. Narasimhan, In So Kweon

Near-Infrared (NIR) images of most materials exhibit less texture or albedo variation, making them beneficial for vision tasks such as intrinsic image decomposition and structured light depth estimation.

Depth Estimation Intrinsic Image Decomposition +1

High Quality Structure From Small Motion for Rolling Shutter Cameras

no code implementations ICCV 2015 Sunghoon Im, Hyowon Ha, Gyeongmin Choe, Hae-Gon Jeon, Kyungdon Joo, In So Kweon

To address these problems, we introduce a novel 3D reconstruction method from narrow-baseline image sequences that effectively handles the rolling shutter effects present in most commercial digital cameras.

3D Reconstruction Depth Estimation +1

Exploiting Shading Cues in Kinect IR Images for Geometry Refinement

no code implementations CVPR 2014 Gyeongmin Choe, Jaesik Park, Yu-Wing Tai, In So Kweon

To resolve the ambiguity between normals and distance in our model, we utilize an initial 3D mesh from KinectFusion and multi-view information to reliably estimate surface details that were not reconstructed by KinectFusion.
