no code implementations • 4 Apr 2024 • Hyomin Kim, Yucheol Jung, Seungyong Lee
Using the auxiliary edges, we design a novel algorithm to optimize the discontinuity and the depth map from the input normal map.
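The snippet above concerns recovering a depth map from a normal map. As a point of reference only (not the paper's auxiliary-edge algorithm), the classical least-squares baseline for this inverse problem is Frankot-Chellappa integration, which solves for depth in the Fourier domain. A minimal sketch, assuming periodic boundaries and a discontinuity-free surface:

```python
import numpy as np

def integrate_normals(p, q):
    """Least-squares depth from gradient fields p = dz/dx, q = dz/dy
    (Frankot-Chellappa). For a normal map n = (nx, ny, nz), the
    gradients would be p = -nx/nz and q = -ny/nz."""
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                          # depth is recovered up to a constant
    return np.real(np.fft.ifft2(Z))
```

Depth discontinuities violate the smoothness assumption built into this baseline, which is exactly the gap that explicit discontinuity handling, as in the paper, targets.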
no code implementations • 2 Apr 2024 • Chaerin Kong, Seungyong Lee, Soohyeok Im, Wonsuk Yang
Image editing has been a long-standing challenge in the research community with its far-reaching impact on numerous applications.
no code implementations • 1 Apr 2024 • Heemin Yang, Jaesung Rim, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho
To handle gyro error, GyroDeblurNet is equipped with two novel neural network blocks: a gyro refinement block and a gyro deblurring block.
1 code implementation • 20 Dec 2023 • Woohyeok Kim, GeonU Kim, Junyong Lee, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho
RAW images are rarely shared, mainly due to their excessive data size compared to the sRGB counterparts produced by camera ISPs.
1 code implementation • 30 Jul 2023 • Yucheol Jung, Hyomin Kim, Gyeongha Hwang, Seung-Hwan Baek, Seungyong Lee
In 3D shape reconstruction based on template mesh deformation, a regularization, such as smoothness energy, is employed to guide the reconstruction in a desirable direction.
no code implementations • 23 Jun 2023 • Seokjun Choi, Seungwoo Yoon, Giljoo Nam, Seungyong Lee, Seung-Hwan Baek
In this paper, we present differentiable display photometric stereo (DDPS), addressing an often overlooked challenge in display photometric stereo: the design of display patterns.
1 code implementation • 29 Jul 2022 • Yucheol Jung, Wonjong Jang, Soongjin Kim, Jiaolong Yang, Xin Tong, Seungyong Lee
To achieve the goal, we propose an MLP-based framework for building a deformable surface model, which takes a latent code and produces a 3D surface.
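The core idea of mapping a latent code to a 3D surface through an MLP can be illustrated with a tiny numpy sketch; the dimensions are chosen arbitrarily and the random weights stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64-D latent code, one hidden layer, V template vertices.
LATENT_DIM, HIDDEN, V = 64, 128, 100

# Randomly initialized weights stand in for a trained deformable surface model.
W1 = rng.normal(0, 0.1, (LATENT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, V * 3))
b2 = np.zeros(V * 3)

def decode_surface(z):
    """Map a latent code z (LATENT_DIM,) to per-vertex 3D positions (V, 3)."""
    h = np.maximum(W1.T @ z + b1, 0.0)     # ReLU hidden layer
    out = W2.T @ h + b2
    return out.reshape(V, 3)

z = rng.normal(size=LATENT_DIM)
verts = decode_surface(z)
print(verts.shape)  # (100, 3)
```

Because the surface is a deterministic function of the latent code, fitting a new shape reduces to optimizing `z` (and optionally the weights) against a reconstruction loss.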
1 code implementation • 25 May 2022 • Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee
While motion compensation greatly improves video deblurring quality, separately performing motion compensation and video deblurring demands huge computational overhead.
1 code implementation • CVPR 2022 • Junyong Lee, Myeonghee Lee, Sunghyun Cho, Seungyong Lee
To facilitate the fusion and propagation of temporal reference features, we propose a propagative temporal fusion module.
Ranked #1 on Reference-based Video Super-Resolution on RealMCVSR
Reference-based Video Super-Resolution • Video Super-Resolution
1 code implementation • 19 Feb 2022 • Kiyeon Kim, Seungyong Lee, Sunghyun Cho
Based on this analysis, we propose Multi-Scale-Stage Network (MSSNet), a novel deep learning-based approach to single image deblurring that adopts our remedies for these defects.
Ranked #4 on Deblurring on RealBlur-R
1 code implementation • 17 Feb 2022 • Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee, Sunghyun Cho
To this end, we present RSBlur, a novel dataset with real blurred images and the corresponding sharp image sequences to enable a detailed analysis of the difference between real and synthetic blur.
Ranked #1 on Deblurring on RSBlur (trained on synthetic)
1 code implementation • CVPR 2021 • Junyong Lee, Hyeongseok Son, Jaesung Rim, Sunghyun Cho, Seungyong Lee
We propose a novel end-to-end learning-based approach for single image defocus deblurring.
Ranked #3 on Image Defocus Deblurring on RealDOF
2 code implementations • 23 Aug 2021 • Hyeongseok Son, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee
To alleviate this problem, we propose two novel approaches to deblur videos by effectively aggregating information from multiple video frames.
1 code implementation • ICCV 2021 • Hyomin Kim, Jungeon Kim, Jaewon Kam, Jaesik Park, Seungyong Lee
We propose deep virtual markers, a framework for estimating dense and accurate positional information for various types of 3D data.
1 code implementation • ICCV 2021 • Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee
To utilize this property of inverse kernels, we exploit the observation that when only the size of a defocus blur changes while its shape is kept, the corresponding inverse kernel keeps the same shape and changes only in scale.
Ranked #8 on Image Defocus Deblurring on DPD
no code implementations • 20 Aug 2021 • Hyomin Kim, Jungeon Kim, Hyeonseo Nam, Jaesik Park, Seungyong Lee
This paper presents an effective method for generating a spatiotemporal (time-varying) texture map for a dynamic object using a single RGB-D camera.
1 code implementation • 9 Jul 2021 • Wonjong Jang, Gwangjin Ju, Yucheol Jung, Jiaolong Yang, Xin Tong, Seungyong Lee
Our framework, dubbed StyleCariGAN, automatically creates a realistic and detailed caricature from an input photo with optional controls on shape exaggeration degree and color stylization type.
1 code implementation • The Visual Computer 2020 • Junyong Lee, Hyeongseok Son, GunHee Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee
We propose a novel approach to transferring the color of a reference image to a given source image.
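The paper's approach is learning-based; as a point of comparison only, the classical statistics-matching baseline (in the spirit of Reinhard et al.'s color transfer, done here per-channel in RGB rather than in a decorrelated color space) can be sketched as:

```python
import numpy as np

def transfer_color(source, reference):
    """Match the per-channel mean and standard deviation of `source`
    to those of `reference`. Both are float images of shape (H, W, 3)
    with values in [0, 1]."""
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_std = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_std * r_std + r_mu
    return np.clip(out, 0.0, 1.0)
```

Global statistics matching like this ignores spatial correspondence between the images, which is one motivation for learned color transfer methods.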
no code implementations • 1 Sep 2020 • Paul L. Rosin, Yu-Kun Lai, David Mould, Ran Yi, Itamar Berger, Lars Doyle, Seungyong Lee, Chuan Li, Yong-Jin Liu, Amir Semmo, Ariel Shamir, Minjung Son, Holger Winnemoller
Despite the recent upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular in portrait image stylisation due to the advent of neural style transfer, performance evaluation in this field remains limited, especially compared to the norms of the computer vision and machine learning communities.
no code implementations • ECCV 2018 • Junho Jeon, Seungyong Lee
Raw depth images captured by consumer depth cameras suffer from noise and missing values.
no code implementations • ECCV 2018 • Seong-Jin Park, Hyeongseok Son, Sunghyun Cho, Ki-Sang Hong, Seungyong Lee
Generative adversarial networks (GANs) have recently been adopted for single image super-resolution (SISR) and have shown impressive results with realistically synthesized high-frequency textures.
no code implementations • ICCV 2017 • Seong-Jin Park, Ki-Sang Hong, Seungyong Lee
Feature fusion blocks learn residual RGB and depth features and their combinations to fully exploit the complementary characteristics of RGB and depth data.
Ranked #27 on Semantic Segmentation on SUN-RGBD (using extra training data)
no code implementations • ICCV 2017 • Sunghyun Cho, Seungyong Lee
One popular approach for blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternatingly estimate the blur kernel and the latent image.
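The alternating estimation described above can be sketched in 1D with numpy. Note this toy version replaces the sparse gradient prior with simple quadratic (Wiener-style) regularization, uses circular convolution, and the function names are illustrative:

```python
import numpy as np

def wiener_div(Y, H, eps=1e-2):
    """Regularized Fourier-domain division Y / H (quadratic prior)."""
    return Y * np.conj(H) / (np.abs(H) ** 2 + eps)

def blind_deconv_1d(y, ksize, iters=20):
    """Alternately update the latent signal x and the blur kernel k
    so that k * x (circular convolution) matches the observation y."""
    n = len(y)
    Y = np.fft.fft(y)
    k = np.zeros(n)
    k[:ksize] = 1.0 / ksize                                     # flat initial kernel
    for _ in range(iters):
        x = np.real(np.fft.ifft(wiener_div(Y, np.fft.fft(k))))  # latent update
        k = np.real(np.fft.ifft(wiener_div(Y, np.fft.fft(x))))  # kernel update
        k = np.maximum(k, 0.0)                                  # nonnegativity
        k /= k.sum() + 1e-12                                    # energy conservation
    return x, k
```

The kernel constraints (nonnegativity, unit sum) are standard in blind deconvolution; the practical difficulty, which the MAP formulation with sparsity priors addresses, is that without a strong prior on the latent image this alternation tends toward the trivial no-blur solution.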