no code implementations • 19 Feb 2024 • Chong Zeng, Yue Dong, Pieter Peers, Youkang Kong, Hongzhi Wu, Xin Tong
To provide the content creator with fine-grained control over the lighting during image generation, we augment the text prompt with detailed lighting information in the form of radiance hints, i.e., visualizations of the scene geometry with a homogeneous canonical material under the target lighting.
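A minimal sketch of the general idea only (not this paper's implementation): radiance hint images can be fed to a diffusion model as extra conditioning channels alongside the noisy latent. The class name `HintConditionedUNetInput` and all layer sizes are hypothetical.

```python
# Sketch: project radiance-hint images into extra channels and concatenate
# them with the noisy latent, so a diffusion UNet can see the target lighting.
import torch
import torch.nn as nn

class HintConditionedUNetInput(nn.Module):
    def __init__(self, hint_channels=3, latent_channels=4, hidden=32):
        super().__init__()
        self.encode_hints = nn.Sequential(
            nn.Conv2d(hint_channels, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_channels, 3, padding=1),
        )

    def forward(self, noisy_latent, radiance_hint):
        # The UNet's first conv would then accept 2 * latent_channels inputs.
        hint_feat = self.encode_hints(radiance_hint)
        return torch.cat([noisy_latent, hint_feat], dim=1)

x = torch.randn(1, 4, 64, 64)        # noisy latent
hint = torch.rand(1, 3, 64, 64)      # radiance hint rendered under target lighting
print(HintConditionedUNetInput()(x, hint).shape)   # torch.Size([1, 8, 64, 64])
```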
no code implementations • 27 Jan 2024 • Yutao Feng, Xiang Feng, Yintong Shang, Ying Jiang, Chang Yu, Zeshun Zong, Tianjia Shao, Hongzhi Wu, Kun Zhou, Chenfanfu Jiang, Yin Yang
We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS.
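A toy sketch of the coupling concept, under the assumption that Gaussian centers are treated as particles driven by a simple simulator (this is not the paper's solver, which handles solids and fluids):

```python
# Sketch: advance 3D Gaussian Splatting centers with a symplectic Euler step
# plus a ground-plane bounce, then hand the moved centers back to the rasterizer.
import numpy as np

def step_gaussians(centers, velocities, dt=1.0 / 60.0, gravity=(0.0, -9.8, 0.0)):
    velocities = velocities + np.asarray(gravity) * dt
    centers = centers + velocities * dt
    below = centers[:, 1] < 0.0          # simple collision with y = 0 plane
    centers[below, 1] = 0.0
    velocities[below, 1] *= -0.4         # lossy bounce
    return centers, velocities

centers = np.random.rand(1000, 3)        # Gaussian means from a 3DGS reconstruction
velocities = np.zeros_like(centers)
for _ in range(60):                      # one second of animation
    centers, velocities = step_gaussians(centers, velocities)
# `centers` would be re-splatted each frame to render the animated scene.
```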
no code implementations • 9 Oct 2023 • Keyang Ye, Hongzhi Wu, Xin Tong, Kun Zhou
We present the first real-time method for inserting a rigid virtual object into a neural radiance field, which produces realistic lighting and shadowing effects and allows interactive manipulation of the object.
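For illustration only: the most basic part of such an insertion is occlusion-aware compositing of the object render into the NeRF render, which a per-pixel depth comparison already gives (the lighting and shadowing interactions the paper addresses are not shown here).

```python
# Sketch: composite a rasterized virtual object over a NeRF render by depth test.
import numpy as np

def composite(nerf_rgb, nerf_depth, obj_rgb, obj_depth, obj_alpha):
    # Keep the object pixel wherever it is closer to the camera than the NeRF.
    obj_in_front = (obj_depth < nerf_depth) & (obj_alpha > 0.5)
    out = nerf_rgb.copy()
    out[obj_in_front] = obj_rgb[obj_in_front]
    return out

h, w = 4, 4
nerf_rgb = np.random.rand(h, w, 3); nerf_depth = np.full((h, w), 2.0)
obj_rgb = np.ones((h, w, 3));       obj_depth = np.full((h, w), 1.5)
obj_alpha = np.ones((h, w))
print(composite(nerf_rgb, nerf_depth, obj_rgb, obj_depth, obj_alpha).shape)
```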
1 code implementation • 25 Aug 2023 • Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong
This paper presents a novel neural implicit radiance representation for free viewpoint relighting from a small set of unstructured photographs of an object lit by a moving point light source different from the view position.
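At minimum, relighting under a moving point light requires a field that is queried with the light position as well as the sample position and view direction. A hedged sketch of that interface (layer sizes and the class name `RelightableField` are arbitrary, not the paper's architecture):

```python
# Sketch: coordinate MLP predicting outgoing radiance from position, view
# direction and point-light position.
import torch
import torch.nn as nn

class RelightableField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # inputs: 3 (position) + 3 (view direction) + 3 (light position) = 9
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB radiance
        )

    def forward(self, x, view_dir, light_pos):
        return self.mlp(torch.cat([x, view_dir, light_pos], dim=-1))

rgb = RelightableField()(torch.rand(8, 3), torch.rand(8, 3), torch.rand(8, 3))
print(rgb.shape)   # torch.Size([8, 3])
```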
no code implementations • 7 Aug 2023 • Xiang Feng, Kaizhang Kang, Fan Pei, Huakeng Ding, Jinjiang You, Ping Tan, Kun Zhou, Hongzhi Wu
We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are fed to a multi-view stereo method to enhance 3D reconstruction.
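One standard way to obtain features that do not depend on the number or order of input views is a shared per-view embedding followed by order-invariant pooling. A minimal sketch under that assumption (not the paper's learned aggregation):

```python
# Sketch: embed per-view photometric measurements with a shared MLP and
# max-pool across views to get view-invariant per-pixel features.
import torch
import torch.nn as nn

class ViewAggregator(nn.Module):
    def __init__(self, in_dim=3, feat_dim=16):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                   nn.Linear(64, feat_dim))

    def forward(self, measurements):          # (num_views, num_pixels, in_dim)
        per_view = self.embed(measurements)   # (num_views, num_pixels, feat_dim)
        return per_view.max(dim=0).values     # order-invariant pooling over views

feats = ViewAggregator()(torch.rand(5, 1024, 3))
print(feats.shape)   # torch.Size([1024, 16]) -> fed to a multi-view stereo method
```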
no code implementations • CVPR 2023 • Xianmin Xu, Yuxin Lin, Haoyang Zhou, Chong Zeng, Yaxin Yu, Kun Zhou, Hongzhi Wu
We propose a unified structured light, consisting of an LED array and an LCD mask, for high-quality acquisition of both shape and reflectance from a single view.
no code implementations • 29 Mar 2022 • Xiaohe Ma, Yaxin Yu, Hongzhi Wu, Kun Zhou
A common, pre-trained latent transform module is also appended to each decoder to offset the cost of the increased number of decoders.
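A sketch of one reading of that sentence: a single frozen, pre-trained transform whose weights are shared across all decoders, so adding decoders does not add transform parameters. The module placement and sizes here are assumptions, not the paper's design.

```python
# Sketch: one shared, frozen latent transform reused by every decoder.
import torch
import torch.nn as nn

shared_transform = nn.Linear(32, 32)          # common, pre-trained module
for p in shared_transform.parameters():
    p.requires_grad = False                   # kept frozen after pre-training

decoders = nn.ModuleList(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
    for _ in range(4)                         # one decoder per output
)

latent = torch.rand(8, 32)
outputs = [dec(shared_transform(latent)) for dec in decoders]
print(len(outputs), outputs[0].shape)         # 4 torch.Size([8, 3])
```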
no code implementations • 16 Mar 2022 • Kaizhang Kang, Chong Zeng, Hongzhi Wu, Kun Zhou
We present a novel framework to automatically learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.
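For intuition only, here is a crude, hand-crafted version of "differential cues" from a rotationally captured stack: per-pixel temporal differences between consecutive frames. The learned transform into view-invariant features is the paper's contribution and is not reproduced here.

```python
# Sketch: simple per-pixel differential cues from a rotating capture stack.
import numpy as np

def differential_cues(stack):
    """stack: (num_frames, H, W) grayscale images captured under rotation."""
    diffs = np.diff(stack, axis=0)             # temporal derivatives
    return np.stack([diffs.mean(axis=0),       # average change
                     diffs.std(axis=0),        # variation over the rotation
                     np.abs(diffs).max(axis=0)], axis=-1)

stack = np.random.rand(36, 64, 64)             # e.g. one frame per 10 degrees
print(differential_cues(stack).shape)          # (64, 64, 3) per-pixel cue vector
```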
no code implementations • 23 Dec 2021 • Guangming Yao, Hongzhi Wu, Yi Yuan, Lincheng Li, Kun Zhou, Xin Yu
In this paper, we present a novel double-diffusion-based neural radiance field, dubbed DD-NeRF, to reconstruct human body geometry and render the human body appearance in novel views from a sparse set of images.
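For context, novel-view rendering with a radiance field ultimately reduces to the standard volume-rendering quadrature along each camera ray. The sketch below shows that generic step only, not the DD-NeRF architecture.

```python
# Sketch: standard NeRF-style volume rendering of one ray from per-sample
# densities and colors.
import numpy as np

def render_ray(densities, colors, deltas):
    alphas = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # final pixel color

n = 64
rgb = render_ray(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.05))
print(rgb)
```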
no code implementations • ICCV 2021 • Kaizhang Kang, Cihui Xie, Ruisheng Zhu, Xiaohe Ma, Ping Tan, Hongzhi Wu, Kun Zhou
We present a novel framework to learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features, which can be plugged into an existing multi-view stereo pipeline for enhanced 3D reconstruction.
no code implementations • 15 Aug 2016 • Elena Garces, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, Diego Gutierrez
We present a method to automatically decompose a light field into its intrinsic shading and albedo components.
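As a point of reference, intrinsic decomposition assumes image = albedo × shading. The toy baseline below recovers a crude decomposition of a single image by smoothing luminance as the shading estimate; it is only a heuristic illustration, not the paper's light-field method.

```python
# Sketch: naive intrinsic decomposition (image = albedo * shading) using a
# box-blurred luminance as a smooth-shading prior.
import numpy as np

def naive_intrinsic_decomposition(image, sigma=5):
    luminance = image.mean(axis=-1)
    k = 2 * sigma + 1
    pad = np.pad(luminance, sigma, mode="edge")
    shading = np.zeros_like(luminance)
    for dy in range(k):                       # box blur of the luminance
        for dx in range(k):
            shading += pad[dy:dy + luminance.shape[0], dx:dx + luminance.shape[1]]
    shading /= k * k
    albedo = image / np.maximum(shading[..., None], 1e-6)
    return albedo, shading

img = np.random.rand(32, 32, 3)
albedo, shading = naive_intrinsic_decomposition(img)
print(albedo.shape, shading.shape)            # (32, 32, 3) (32, 32)
```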