1 code implementation • 7 Dec 2022 • Gyeongmin Choe, Beibei Du, Seonghyeon Nam, Xiaoyu Xiang, Bo Zhu, Rakesh Ranjan
To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks.
1 code implementation • CVPR 2023 • Zhanghao Sun, Wei Ye, Jinhui Xiong, Gyeongmin Choe, Jialiang Wang, Shuochen Su, Rakesh Ranjan
We believe the methods and dataset are beneficial to a broad community as dToF depth sensing is becoming mainstream on mobile devices.
no code implementations • 18 Aug 2016 • Gyeongmin Choe, Jaesik Park, Yu-Wing Tai, In So Kweon
To resolve the ambiguity between normals and distances in our model, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured or reconstructed by the Kinect fusion.
no code implementations • CVPR 2016 • Gyeongmin Choe, Srinivasa G. Narasimhan, In So Kweon
Near-infrared (NIR) images of most materials exhibit less texture and albedo variation, making them beneficial for vision tasks such as intrinsic image decomposition and structured-light depth estimation.
no code implementations • 24 Mar 2016 • Youngjin Yoon, Gyeongmin Choe, Namil Kim, Joon-Young Lee, In So Kweon
We present surface normal estimation using a single near infrared (NIR) image.
no code implementations • ICCV 2015 • Sunghoon Im, Hyowon Ha, Gyeongmin Choe, Hae-Gon Jeon, Kyungdon Joo, In So Kweon
To address these problems, we introduce a novel 3D reconstruction method from narrow-baseline image sequences that effectively handles the rolling-shutter effects that occur in most commercial digital cameras.
no code implementations • CVPR 2015 • Hae-Gon Jeon, Jaesik Park, Gyeongmin Choe, Jinsun Park, Yunsu Bok, Yu-Wing Tai, In So Kweon
This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera.
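The core idea in this line of work is to build a cost volume by shifting the sub-aperture images of the light field toward the center view at sub-pixel precision (via the Fourier phase-shift theorem) and picking the disparity with the lowest matching cost. A minimal sketch of that idea follows; the function names, toy setup, and cost function here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def phase_shift(img, dx, dy):
    # Sub-pixel circular shift of a 2D image via the Fourier phase-shift theorem.
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.exp(-2j * np.pi * (fx * dx + fy * dy))))

def depth_from_lightfield(views, center, disparities):
    # views: dict mapping angular offset (u, v) -> 2D grayscale sub-aperture image.
    # For each candidate disparity d, shift every view toward the center view by
    # d * (u, v), accumulate an absolute-difference cost, and take the per-pixel
    # argmin over candidates.
    H, W = center.shape
    cost = np.zeros((len(disparities), H, W))
    for i, d in enumerate(disparities):
        for (u, v), img in views.items():
            shifted = phase_shift(img, d * u, d * v)
            cost[i] += np.abs(shifted - center)
    return disparities[np.argmin(cost, axis=0)]
```

Because the shifts are computed in the Fourier domain, disparity candidates need not be integers, which is what makes this kind of approach accurate for the very narrow baselines of a lenslet camera.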
no code implementations • CVPR 2014 • Gyeongmin Choe, Jaesik Park, Yu-Wing Tai, In So Kweon
To resolve ambiguity in our model between normals and distance, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not reconstructed by the Kinect fusion.