Search Results for author: Chong Zeng

Found 6 papers, 2 papers with code

DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation

no code implementations • 19 Feb 2024 • Chong Zeng, Yue Dong, Pieter Peers, Youkang Kong, Hongzhi Wu, Xin Tong

To provide the content creator with fine-grained control over the lighting during image generation, we augment the text prompt with detailed lighting information in the form of radiance hints, i.e., visualizations of the scene geometry with a homogeneous canonical material under the target lighting.

Image Generation
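The radiance-hint idea above can be sketched in a few lines: shade a proxy of the scene geometry with one homogeneous canonical material under the target light, and use the result as a conditioning image. This is only an illustrative toy (a Lambertian material and an orthographic sphere proxy are my assumptions, not the paper's setup):

```python
import numpy as np

def radiance_hint(normals, light_dir, albedo=0.8):
    """Shade proxy geometry with a single homogeneous Lambertian
    material under a target light to produce a radiance hint image.

    normals: (H, W, 3) unit surface normals of the scene proxy
    light_dir: (3,) direction toward the light (normalized below)
    """
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    # Clamped n·l term; the same canonical albedo everywhere.
    shading = np.clip(normals @ l, 0.0, None)
    return albedo * shading

# Toy proxy geometry: an orthographically projected unit sphere.
H = W = 64
y, x = np.mgrid[-1:1:H * 1j, -1:1:W * 1j]
r2 = x**2 + y**2
z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
normals = np.dstack([x, y, z])
normals[r2 > 1.0] = 0.0  # background: zero normal -> black hint

hint = radiance_hint(normals, light_dir=(0.5, 0.5, 0.7))
```

Rendering one such hint per candidate lighting condition, and feeding it alongside the text prompt, is the gist of the fine-grained control described in the abstract.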

One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion

no code implementations • 14 Nov 2023 • Minghua Liu, Ruoxi Shi, Linghao Chen, Zhuoyang Zhang, Chao Xu, Xinyue Wei, Hansheng Chen, Chong Zeng, Jiayuan Gu, Hao Su

Recent advancements in open-world 3D object generation have been remarkable, with image-to-3D methods offering superior fine-grained control over their text-to-3D counterparts.

Image Generation • Image to 3D • +1

Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model

1 code implementation • 23 Oct 2023 • Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, Hao Su

We report Zero123++, an image-conditioned diffusion model for generating 3D-consistent multi-view images from a single input view.

Relighting Neural Radiance Fields with Shadow and Highlight Hints

1 code implementation • 25 Aug 2023 • Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong

This paper presents a novel neural implicit radiance representation for free-viewpoint relighting from a small set of unstructured photographs of an object, lit by a moving point light source located away from the view position.

Position
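The shadow and highlight hints named in the title can be illustrated with simple analytic stand-ins. This sketch is my own simplification, not the paper's method: the highlight hint uses a Blinn-Phong specular lobe, and the shadow hint covers only attached shadows (n·l ≤ 0); real cast-shadow hints would require ray casting against the geometry:

```python
import numpy as np

def highlight_hint(normals, view_dir, light_dir, shininess=64.0):
    """Blinn-Phong specular lobe as a cheap highlight hint."""
    v = np.asarray(view_dir, dtype=np.float64)
    v = v / np.linalg.norm(v)
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    h = (v + l) / np.linalg.norm(v + l)  # half vector
    return np.clip(normals @ h, 0.0, None) ** shininess

def shadow_hint(normals, light_dir):
    """Attached-shadow mask: 1 where the surface faces away
    from the light. (Cast shadows would need visibility rays.)"""
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    return (normals @ l <= 0.0).astype(np.float64)

# Toy usage: a flat patch facing +z.
n = np.zeros((8, 8, 3))
n[..., 2] = 1.0
spec = highlight_hint(n, view_dir=(0, 0, 1), light_dir=(0, 0, 1))
shad = shadow_hint(n, light_dir=(0, 0, -1))
```

Such hint images, computed from known geometry and the moving point light, give the network explicit cues about where highlights and shadows should appear instead of forcing it to learn them from scratch.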

A Unified Spatial-Angular Structured Light for Single-View Acquisition of Shape and Reflectance

no code implementations • CVPR 2023 • Xianmin Xu, Yuxin Lin, Haoyang Zhou, Chong Zeng, Yaxin Yu, Kun Zhou, Hongzhi Wu

We propose a unified structured light, consisting of an LED array and an LCD mask, for high-quality acquisition of both shape and reflectance from a single view.

DiFT: Differentiable Differential Feature Transform for Multi-View Stereo

no code implementations • 16 Mar 2022 • Kaizhang Kang, Chong Zeng, Hongzhi Wu, Kun Zhou

We present a novel framework to automatically learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.

3D Reconstruction
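The "differential cues" in the abstract can be pictured as per-pixel finite differences across the densely captured rotating stack. The sketch below computes only those raw cues; the paper's learned, differentiable transform into discriminative, view-invariant features is omitted here, and the function name is illustrative:

```python
import numpy as np

def differential_features(stack):
    """Per-pixel differential cues from a dense rotational stack.

    stack: (T, H, W) intensities over T closely spaced rotation steps.
    Returns (H, W, T-1) finite differences along the capture axis --
    the raw cues that a learned transform would map to spatially
    discriminative, view-invariant per-pixel features.
    """
    d = np.diff(stack.astype(np.float64), axis=0)  # (T-1, H, W)
    return np.moveaxis(d, 0, -1)                   # (H, W, T-1)

# Toy stack: intensity ramps linearly over 5 rotation steps,
# so every differential cue equals 1.
stack = np.arange(5, dtype=np.float64)[:, None, None] * np.ones((1, 4, 4))
feats = differential_features(stack)
```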
