no code implementations • ECCV 2020 • Jaesung Rim, Haeyun Lee, Jucheol Won, Sunghyun Cho
To collect our dataset, we build an image acquisition system to simultaneously capture geometrically aligned pairs of blurred and sharp images, and develop a postprocessing method to produce high-quality ground truth images.
no code implementations • 21 Apr 2024 • Haechan Lee, Wonjoon Jin, Seung-Hwan Baek, Sunghyun Cho
In this paper, we propose the first generalizable view synthesis approach that specifically targets multi-view stereo-camera images.
no code implementations • 1 Apr 2024 • Heemin Yang, Jaesung Rim, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho
To handle gyro error, GyroDeblurNet is equipped with two novel neural network blocks: a gyro refinement block and a gyro deblurring block.
no code implementations • 1 Apr 2024 • Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho
Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to model human-centric perceptual assessment.
no code implementations • 31 Jan 2024 • Geonung Kim, Beomsu Kim, Eunhyeok Park, Sunghyun Cho
As recent advancements in large-scale Text-to-Image (T2I) diffusion models have yielded remarkable high-quality image generation, diverse downstream Image-to-Image (I2I) applications have emerged.
no code implementations • 31 Dec 2023 • Hwayoon Lee, Kyoungkook Kang, Hyeongmin Lee, Seung-Hwan Baek, Sunghyun Cho
UGPNet first restores the image structure of a degraded input using a regression model, then synthesizes a perceptually realistic image with a generative model on top of the regressed output.
1 code implementation • 20 Dec 2023 • Woohyeok Kim, GeonU Kim, Junyong Lee, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho
RAW images are rarely shared, mainly due to their excessive data size compared to the sRGB counterparts produced by camera ISPs.
no code implementations • 20 Dec 2023 • Jaesung Rim, Junyong Lee, Heemin Yang, Sunghyun Cho
We simultaneously capture a long exposure wide-angle image and ultra-wide burst images from a smartphone, and use the sharp burst to estimate blur kernels for deblurring the wide-angle image.
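Once blur kernels have been estimated from the sharp burst, the remaining step is non-blind deconvolution of the wide-angle image. A minimal sketch of that step with a classic Wiener filter on a noiseless toy image (a simplification of the paper's pipeline; the function name, kernel, and SNR parameter are all illustrative):

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=1e4):
    # Non-blind deconvolution: invert a known blur kernel in the Fourier
    # domain, with Wiener-style regularization to avoid noise amplification.
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))

# Toy demo: blur a random "sharp" image with a known 3x3 box kernel
# (circular convolution via FFT, so the model matches the inversion).
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deconv(blurred, kernel)
```

Here the kernel is known exactly and the image is noiseless, so a large SNR value suffices; with kernels estimated from a real burst, the regularization would need to absorb both noise and kernel error.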
no code implementations • 19 Sep 2023 • Nuri Ryu, Minsu Gong, Geonung Kim, Joo-Haeng Lee, Sunghyun Cho
We introduce POP3D, a novel framework that creates a full $360^\circ$-view 3D model from a single image.
no code implementations • 19 Sep 2023 • Kyungmin Jo, Wonjoon Jin, Jaegul Choo, Hyunjoon Lee, Sunghyun Cho
In this paper, we propose SideGAN, a novel 3D GAN training method to generate photo-realistic images irrespective of the camera pose, especially for faces of side-view angles.
1 code implementation • ICCV 2023 • Dongwoo Lee, Jeongtaek Oh, Jaesung Rim, Sunghyun Cho, Kyoung Mu Lee
We minimize the photo-consistency loss in the blurred image space and obtain sharp radiance fields together with camera trajectories that explain the blur of all images.
no code implementations • 21 Jun 2023 • Youngchan Kim, Wonjoon Jin, Sunghyun Cho, Seung-Hwan Baek
Here, we propose to model spectro-polarimetric fields, the spatial Stokes-vector distribution of any light ray at an arbitrary wavelength.
no code implementations • 15 Jun 2023 • Kwonhyung Lee, Yejin Lim, Sunghyun Cho
Analyzing the dynamics between the variables at the SPNE state, we obtain the following stylized facts: (1) the host nation's skill relevance and the wage differential are positively correlated.
no code implementations • 14 Jun 2023 • Kwonhyung Lee, Yejin Lim, Sunghyun Cho
This study expands upon the foundation of the 'Skill-Relevance-Self Selection' model of labor immigration, introduced in our previous study (Lee, Lim, & Cho, 2022).
1 code implementation • CVPR 2023 • Sohyun Lee, Jaesung Rim, Boseung Jeong, GeonU Kim, Byungju Woo, Haechan Lee, Sunghyun Cho, Suha Kwak
We study human pose estimation in extremely low-light images.
no code implementations • ICCV 2023 • Kyungmin Jo, Wonjoon Jin, Jaegul Choo, Hyunjoon Lee, Sunghyun Cho
In this paper, we propose SideGAN, a novel 3D GAN training method to generate photo-realistic images irrespective of the camera pose, especially for faces of side-view angles.
no code implementations • 30 Nov 2022 • Wonjoon Jin, Nuri Ryu, Geonung Kim, Seung-Hwan Baek, Sunghyun Cho
To tackle this, we present Dr. 3D, a novel adaptation approach that adapts an existing 3D GAN to artistic drawings.
1 code implementation • 26 Nov 2022 • Seongtae Kim, Kyoungkook Kang, Geonung Kim, Seung-Hwan Baek, Sunghyun Cho
In this paper, we propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
1 code implementation • 20 Jul 2022 • Geonung Kim, Kyoungkook Kang, Seongtae Kim, Hwayoon Lee, Sehoon Kim, Jonghyun Kim, Seung-Hwan Baek, Sunghyun Cho
In this paper, we propose BigColor, a novel colorization approach that provides vivid colorization for diverse in-the-wild images with complex structures.
2 code implementations • 15 Jul 2022 • Seyoung Ahn, Soohyeong Kim, Yongseok Kwon, Joohan Park, Jiseung Youn, Sunghyun Cho
To address the aforementioned challenge, we propose FedDif, a novel diffusion strategy for machine learning (ML) models that maximizes FL performance with non-IID data.
1 code implementation • 25 May 2022 • Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee
While motion compensation greatly improves video deblurring quality, separately performing motion compensation and video deblurring demands huge computational overhead.
1 code implementation • CVPR 2022 • Junyong Lee, Myeonghee Lee, Sunghyun Cho, Seungyong Lee
To facilitate the fusion and propagation of temporal reference features, we propose a propagative temporal fusion module.
Ranked #1 on Reference-based Video Super-Resolution on RealMCVSR
1 code implementation • 19 Feb 2022 • Kiyeon Kim, Seungyong Lee, Sunghyun Cho
Based on the analysis, we propose Multi-Scale-Stage Network (MSSNet), a novel deep learning-based approach to single image deblurring that adopts our remedies to the defects.
Ranked #4 on Deblurring on RealBlur-R
1 code implementation • 17 Feb 2022 • Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee, Sunghyun Cho
To this end, we present RSBlur, a novel dataset with real blurred images and the corresponding sharp image sequences to enable a detailed analysis of the difference between real and synthetic blur.
Ranked #1 on Deblurring on RSBlur (trained on synthetic)
no code implementations • CVPR 2022 • Jaebong Jeong, Janghun Jo, Sunghyun Cho, Jaesik Park
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network that synthesizes color values for the input 3D scene.
1 code implementation • ICCV 2021 • Jinwoo Lee, Hyunsung Go, Hyunjoon Lee, Sunghyun Cho, Minhyuk Sung, Junho Kim
In this work, we propose Camera calibration TRansformer with Line-Classification (CTRL-C), an end-to-end neural network-based approach to single image camera calibration, which directly estimates the camera parameters from an image and a set of line segments.
1 code implementation • CVPR 2021 • Junyong Lee, Hyeongseok Son, Jaesung Rim, Sunghyun Cho, Seungyong Lee
We propose a novel end-to-end learning-based approach for single image defocus deblurring.
Ranked #3 on Image Defocus Deblurring on RealDOF
2 code implementations • 23 Aug 2021 • Hyeongseok Son, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee
To alleviate this problem, we propose two novel approaches to deblur videos by effectively aggregating information from multiple video frames.
no code implementations • 23 Aug 2021 • Jaebong Jeong, Janghun Jo, Jingdong Wang, Sunghyun Cho, Jaesik Park
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network that synthesizes color values for the input 3D scene.
1 code implementation • ICCV 2021 • Kyoungkook Kang, Seongtae Kim, Sunghyun Cho
For successful semantic editing of real images, it is critical for a GAN inversion method to find an in-domain latent code that aligns with the domain of a pre-trained GAN model.
1 code implementation • ICCV 2021 • Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee
To exploit this property of inverse kernels, we use the observation that when only the size of a defocus blur changes while its shape is kept, the shape of the corresponding inverse kernel remains the same and only its scale changes.
Ranked #8 on Image Defocus Deblurring on DPD
1 code implementation • CVPR 2021 • Seunghun Lee, Sunghyun Cho, Sunghoon Im
Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images.
Ranked #1 on Domain Adaptation on MNIST-to-MNIST-M
1 code implementation • The Visual Computer 2020 • Junyong Lee, Hyeongseok Son, GunHee Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee
We propose a novel approach to transferring the color of a reference image to a given source image.
1 code implementation • 17 Jul 2020 • Taeyoung Son, Juwon Kang, Namyup Kim, Sunghyun Cho, Suha Kwak
Despite the great advances in visual recognition, it has been witnessed that recognition models trained on clean images of common datasets are not robust against distorted images in the real world.
4 code implementations • CVPR 2019 • Jiwoon Ahn, Sunghyun Cho, Suha Kwak
For generating the pseudo labels, we first identify confident seed areas of object classes from attention maps of an image classification model, and propagate them to discover the entire instance areas with accurate boundaries.
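A toy sketch of this seed-and-propagate idea, substituting a simple label-propagation scheme (diffusion over a row-normalized affinity matrix with a decayed seed term) for the paper's actual propagation; the affinity matrix, thresholds, and decay factor are all illustrative:

```python
import numpy as np

def seeds_and_propagate(cam, affinity, hi=0.7, alpha=0.5, n_iter=50):
    # Confident seed areas: pixels whose class attention exceeds a high threshold.
    seeds = (cam > hi).astype(float)
    # Row-normalized transition matrix over the pixel-affinity graph.
    T = affinity / affinity.sum(axis=1, keepdims=True)
    # Diffuse seed confidence to similar pixels, re-injecting the seeds
    # each step so confident regions stay anchored.
    score = seeds.copy()
    for _ in range(n_iter):
        score = alpha * (T @ score) + (1 - alpha) * seeds
    return seeds, score

# Toy example: 6 "pixels"; the first three are mutually similar (one object
# instance), the last three form a separate, weakly connected region.
cam = np.array([0.9, 0.5, 0.4, 0.1, 0.05, 0.0])
aff = np.array([
    [1.0, 0.9, 0.8, 0.0, 0.0, 0.0],
    [0.9, 1.0, 0.9, 0.1, 0.0, 0.0],
    [0.8, 0.9, 1.0, 0.1, 0.0, 0.0],
    [0.0, 0.1, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.0, 0.0, 0.9, 1.0, 0.9],
    [0.0, 0.0, 0.0, 0.8, 0.9, 1.0],
])
seeds, score = seeds_and_propagate(cam, aff)
```

Only the first pixel passes the seed threshold, but after propagation its two high-affinity neighbors score well above the unrelated region, mimicking how seeds grow into full instance areas.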
no code implementations • ECCV 2018 • Seong-Jin Park, Hyeongseok Son, Sunghyun Cho, Ki-Sang Hong, Seungyong Lee
Generative adversarial networks (GANs) have recently been adopted to single image super resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures.
no code implementations • 8 Feb 2018 • Jeehyeong Kim, Joohan Park, Jaewon Noh, Sunghyun Cho
For device-to-device (D2D) communication in Internet-of-Things (IoT)-enabled 5G systems, centralized resource allocation is limited by the complicated interference between different links.
no code implementations • ICCV 2017 • Sunghyun Cho, Seungyong Lee
One popular approach for blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternatingly estimate the blur kernel and the latent image.
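A minimal 1D sketch of this alternating MAP estimation, substituting a quadratic (Tikhonov) gradient prior for the sparsity prior so that each sub-problem has a closed-form Fourier solution; all names, signal sizes, and regularization weights are illustrative:

```python
import numpy as np

def deconv_x(b, k, lam):
    # Latent-signal step: minimize ||k*x - b||^2 + lam*||grad x||^2,
    # solved in closed form in the Fourier domain.
    K = np.fft.fft(k, len(b))
    D = np.fft.fft([1.0, -1.0], len(b))   # finite-difference (gradient) operator
    B = np.fft.fft(b)
    X = np.conj(K) * B / (np.abs(K) ** 2 + lam * np.abs(D) ** 2)
    return np.real(np.fft.ifft(X))

def deconv_k(b, x, klen, eps=1e-3):
    # Kernel step: regularized least-squares fit of k given the latent estimate.
    X = np.fft.fft(x)
    B = np.fft.fft(b)
    K = np.conj(X) * B / (np.abs(X) ** 2 + eps)
    k = np.real(np.fft.ifft(K))[:klen]
    k = np.clip(k, 1e-8, None)            # enforce non-negativity
    return k / k.sum()                    # enforce unit sum

x_true = np.zeros(128)
x_true[[20, 50, 90]] = 1.0                # sparse-gradient latent signal
k_true = np.array([0.2, 0.6, 0.2])
b = np.convolve(x_true, k_true)[:128]     # blurred observation

k = np.ones(3) / 3                        # flat initial kernel guess
for _ in range(20):                       # alternate the two MAP sub-problems
    x = deconv_x(b, k, lam=0.01)
    k = deconv_k(b, x, klen=3)
```

The actual MAP formulations use sparse (e.g. hyper-Laplacian) gradient priors, which require iterative solvers for the latent-image step; the alternation structure is the same.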
no code implementations • CVPR 2014 • Zhe Hu, Sunghyun Cho, Jue Wang, Ming-Hsuan Yang
Images taken in low-light conditions with handheld cameras are often blurry due to the required long exposure time.
Ranked #11 on Deblurring on RealBlur-R (trained on GoPro)
no code implementations • CVPR 2013 • Lin Zhong, Sunghyun Cho, Dimitris Metaxas, Sylvain Paris, Jue Wang
Based on this observation, our method applies a series of directional filters at different orientations to the input image, and estimates an accurate Radon transform of the blur kernel from each filtered image.
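The quantity estimated per filtered image is a slice of the blur kernel's Radon transform, i.e. a line-integral projection of the kernel along the filter direction. For axis-aligned orientations these projections can be sketched directly (toy kernel, purely illustrative):

```python
import numpy as np

# Toy motion-blur kernel: a horizontal line segment, normalized to sum to 1.
k = np.zeros((15, 15))
k[7, 4:11] = 1.0 / 7.0

# Radon-transform slices at 0 and 90 degrees: line integrals of the kernel,
# here just sums along the rows / columns of the kernel grid.
proj_0 = k.sum(axis=0)    # projection onto the horizontal axis: spread out
proj_90 = k.sum(axis=1)   # projection onto the vertical axis: a sharp peak
```

For a horizontal blur, the 0-degree projection spreads the mass across the blur length while the 90-degree projection collapses it into a single peak; collecting such slices over many orientations constrains the full 2D kernel.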