Search Results for author: Changil Kim

Found 23 papers, 9 papers with code

IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

no code implementations • 23 Jan 2024 • Zhi-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim

While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion.

3D Reconstruction, Inverse Rendering +1

TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion

no code implementations • 17 Jan 2024 • Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li

In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects using only a few casually captured images, which could significantly democratize texture creation.

Texture Synthesis

SpecNeRF: Gaussian Directional Encoding for Specular Reflections

no code implementations • 20 Dec 2023 • Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt

We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields, which helps decompose appearance into more physically meaningful components.
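To make the idea concrete, here is a minimal sketch of what a learnable Gaussian directional encoding might look like: view directions are encoded by a bank of Gaussian lobes with learnable axes and sharpness, giving the radiance MLP a smooth basis for specular highlights. This is an illustrative assumption, not the paper's implementation (whose encoding is additionally spatially varying); all names and shapes here are hypothetical.

```python
import numpy as np

def gaussian_directional_encoding(d, mus, lambdas):
    """Encode unit view directions with a bank of Gaussian lobes.

    d:       (N, 3) unit view directions.
    mus:     (K, 3) learnable lobe axes (unit vectors).
    lambdas: (K,)   learnable per-lobe sharpness.

    Returns (N, K) features; each lobe responds most strongly to
    directions aligned with its axis.
    """
    cos = d @ mus.T                       # (N, K) cosine to each lobe axis
    return np.exp(lambdas * (cos - 1.0))  # spherical-Gaussian-style falloff

# Toy usage: 4 random directions, 8 random lobes.
rng = np.random.default_rng(0)
d = rng.normal(size=(4, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
mus = rng.normal(size=(8, 3)); mus /= np.linalg.norm(mus, axis=1, keepdims=True)
feat = gaussian_directional_encoding(d, mus, rng.uniform(1.0, 32.0, size=8))
print(feat.shape)  # (4, 8)
```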

Single-Image 3D Human Digitization with Shape-Guided Diffusion

no code implementations • 15 Nov 2023 • Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang

We present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image.

Image Generation, Inverse Rendering

VR-NeRF: High-Fidelity Virtualized Walkable Spaces

no code implementations • 5 Nov 2023 • Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.


OmnimatteRF: Robust Omnimatte with 3D Background Modeling

1 code implementation • ICCV 2023 • Geng Lin, Chen Gao, Jia-Bin Huang, Changil Kim, Yipeng Wang, Matthias Zwicker, Ayush Saraf

Video matting has broad applications, ranging from adding effects to casually captured movies to assisting professional video production.

Image Matting, Video Matting

Consistent View Synthesis with Pose-Guided Diffusion Models

no code implementations • CVPR 2023 • Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, Johannes Kopf

In this work, we propose a pose-guided diffusion model to generate a consistent long-term video of novel views from a single image.

Novel View Synthesis

Robust Dynamic Radiance Fields

1 code implementation • CVPR 2023 • Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang

Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.

AMICO: Amodal Instance Composition

no code implementations • 11 Oct 2022 • Peiye Zhuang, Jia-Bin Huang, Ayush Saraf, Xuejian Rong, Changil Kim, Denis Demandolx

Image composition aims to blend multiple objects to form a harmonized image.

Object

Learning Neural Light Fields With Ray-Space Embedding

no code implementations • CVPR 2022 • Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, Changil Kim

Our method supports rendering with a single network evaluation per pixel for small baseline light fields and with only a few evaluations per pixel for light fields with larger baselines.
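The one-evaluation-per-pixel claim is easy to see in code: a light field network maps a ray parameterization directly to color, with no per-ray volume sampling as in NeRF. Below is a minimal sketch using Plücker ray coordinates and a plain MLP; the paper's ray-space embedding network and architecture details are omitted, so the class and layer sizes here are illustrative placeholders.

```python
import torch
import torch.nn as nn

def plucker(origins, dirs):
    """Map rays to a 6D Plücker parameterization (d, o x d)."""
    d = dirs / dirs.norm(dim=-1, keepdim=True)
    return torch.cat([d, torch.cross(origins, d, dim=-1)], dim=-1)

class LightFieldNet(nn.Module):
    """Rays -> RGB in a single forward pass (no volume sampling)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, origins, dirs):
        return self.mlp(plucker(origins, dirs))  # one evaluation per ray

net = LightFieldNet()
o = torch.zeros(1024, 3)   # ray origins
d = torch.randn(1024, 3)   # ray directions
rgb = net(o, d)            # (1024, 3): one MLP call per ray, not per sample
```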

Boosting View Synthesis With Residual Transfer

no code implementations • CVPR 2022 • Xuejian Rong, Jia-Bin Huang, Ayush Saraf, Changil Kim, Johannes Kopf

We present a simple but effective technique to boost the rendering quality, which can be easily integrated with most view synthesis methods.

Novel View Synthesis
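A hedged sketch of the residual-transfer idea as I read it: re-render the input views with the base method, compute per-pixel residuals against the captured photos, and transfer those residuals onto the rendered novel view. The warping and averaging below are placeholders, not the paper's exact scheme.

```python
import numpy as np

def boost_with_residuals(novel_render, input_photos, input_renders, warps):
    """Add back high-frequency detail the base renderer missed.

    novel_render:  (H, W, 3) rendered novel view from the base method.
    input_photos:  list of (H, W, 3) captured input images.
    input_renders: list of (H, W, 3) the same views re-rendered.
    warps:         list of functions mapping an input-view image into the
                   novel view (hypothetical, e.g. depth-based warps).
    """
    residuals = [w(photo - render)  # what the base method got wrong
                 for photo, render, w in zip(input_photos, input_renders, warps)]
    return novel_render + np.mean(residuals, axis=0)

# Toy usage with identity warps as stand-ins for real cross-view warps.
H, W = 4, 4
photos = [np.random.rand(H, W, 3)]
renders = [np.random.rand(H, W, 3)]
boosted = boost_with_residuals(np.random.rand(H, W, 3), photos, renders,
                               [lambda x: x])
```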

Learning Neural Light Fields with Ray-Space Embedding Networks

1 code implementation • 2 Dec 2021 • Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf, Changil Kim

Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel.

Neural 3D Video Synthesis from Multi-view Video

1 code implementation • CVPR 2022 • Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, Zhaoyang Lv

We propose a novel approach for 3D video synthesis that encodes multi-view video recordings of a dynamic real-world scene in a compact yet expressive representation, enabling high-quality view synthesis and motion interpolation.

Motion Interpolation
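One common way to obtain such a compact dynamic representation, and roughly the idea here, is to condition a radiance field on a learned per-frame latent code rather than on raw time. A hedged sketch under that assumption (dimensions, architecture, and the omitted view-direction input are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class DynamicRadianceField(nn.Module):
    """NeRF-style MLP conditioned on a learned per-frame latent code."""
    def __init__(self, num_frames, latent_dim=64, hidden=256):
        super().__init__()
        # One learnable code per video frame; the code, not raw time,
        # tells the MLP how the scene looks at that moment.
        self.frame_codes = nn.Embedding(num_frames, latent_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))  # RGB + density

    def forward(self, xyz, frame_idx):
        z = self.frame_codes(frame_idx)              # (N, latent_dim)
        out = self.mlp(torch.cat([xyz, z], dim=-1))
        return out[:, :3].sigmoid(), out[:, 3:].relu()

field = DynamicRadianceField(num_frames=300)
xyz = torch.randn(8, 3)
rgb, sigma = field(xyz, torch.full((8,), 42))  # query scene state at frame 42
```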

Space-time Neural Irradiance Fields for Free-Viewpoint Video

no code implementations • CVPR 2021 • Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim

We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video.

Depth Estimation
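Here the field is queried at a continuous space-time coordinate: the network takes (x, y, z, t) and returns color and density, so a free-viewpoint frame comes from volume rendering the field at a fixed t. A minimal sketch of that interface (positional encoding and the paper's depth supervision omitted; layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class SpaceTimeField(nn.Module):
    """Maps a space-time point (x, y, z, t) to color and density."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))  # RGB + density

    def forward(self, xyz, t):
        out = self.mlp(torch.cat([xyz, t], dim=-1))
        return out[:, :3].sigmoid(), out[:, 3:].relu()

field = SpaceTimeField()
xyz = torch.rand(8, 3)
t = torch.full((8, 1), 0.5)   # all samples at the same time slice
rgb, sigma = field(xyz, t)    # render a frame by fixing t, varying viewpoint
```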

A Dataset of Flash and Ambient Illumination Pairs from the Crowd

no code implementations • ECCV 2018 • Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys, Wojciech Matusik

We present a dataset of thousands of ambient and flash illumination pairs to enable studying flash photography and other applications that can benefit from having separate illuminations.

On Learning Associations of Faces and Voices

1 code implementation • 15 May 2018 • Changil Kim, Hijung Valentina Shin, Tae-Hyun Oh, Alexandre Kaspar, Mohamed Elgharib, Wojciech Matusik

We computationally model the overlapping information between faces and voices and show that the learned cross-modal representation contains enough information to identify matching faces and voices with performance similar to that of humans.

Speaker Identification
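With such a cross-modal representation, matching reduces to nearest-neighbor search in the shared embedding space: a face matches the voice whose embedding lies closest. A sketch of that inference step; the encoders producing the embeddings and the training loss are assumed, not shown.

```python
import torch
import torch.nn.functional as F

def match_faces_to_voices(face_emb, voice_emb):
    """Given embeddings from a shared cross-modal space, return, for each
    face, the index of the best-matching voice by cosine similarity."""
    sim = F.normalize(face_emb, dim=-1) @ F.normalize(voice_emb, dim=-1).T
    return sim.argmax(dim=-1)  # (num_faces,) indices into the voices

faces = torch.randn(5, 128)   # embeddings from a face encoder (assumed)
voices = torch.randn(5, 128)  # embeddings from a voice encoder (assumed)
print(match_faces_to_voices(faces, voices))
```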

Learning-based Video Motion Magnification

2 code implementations • ECCV 2018 • Tae-Hyun Oh, Ronnachai Jaroensri, Changil Kim, Mohamed Elgharib, Frédo Durand, William T. Freeman, Wojciech Matusik

We show that the learned filters achieve high-quality results on real videos, with less ringing artifacts and better noise characteristics than previous methods.

Motion Magnification
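The core of learning-based motion magnification fits in a few lines: an encoder splits each frame into texture and shape representations, the shape difference between two frames is scaled by a magnification factor, and a decoder reconstructs the magnified frame. A hedged sketch with single-layer placeholder encoders/decoder standing in for the paper's deeper networks:

```python
import torch
import torch.nn as nn

class MagNet(nn.Module):
    """Magnify subtle motion by amplifying shape-feature differences."""
    def __init__(self, ch=32):
        super().__init__()
        # Placeholder convs; the paper's encoders and decoder are deeper.
        self.shape_enc = nn.Conv2d(3, ch, 3, padding=1)
        self.texture_enc = nn.Conv2d(3, ch, 3, padding=1)
        self.decoder = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, frame_a, frame_b, alpha):
        sa, sb = self.shape_enc(frame_a), self.shape_enc(frame_b)
        magnified_shape = sa + alpha * (sb - sa)  # amplify the motion signal
        tex = self.texture_enc(frame_b)           # keep appearance unchanged
        return self.decoder(torch.cat([tex, magnified_shape], dim=1))

net = MagNet()
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
out = net(a, b, alpha=10.0)  # motion between frames a and b magnified 10x
```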
