Search Results for author: Zerong Zheng

Found 28 papers, 9 papers with code

RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera

no code implementations ECCV 2020 Zhuo Su, Lan Xu, Zerong Zheng, Tao Yu, Yebin Liu, Lu Fang

To enable robust tracking, we incorporate both the initial model and the various visual cues into a novel performance capture scheme with hybrid motion optimization and semantic volumetric fusion, which can capture challenging human motions in a monocular setting without a pre-scanned detailed template, and which can reinitialize to recover from tracking failures and disappear-reoccur scenarios.

4D Reconstruction

Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians

1 code implementation 5 Dec 2023 Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu

Creating high-fidelity 3D head avatars has long been a research hotspot, but it remains a great challenge under lightweight sparse-view setups.


Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling

1 code implementation 27 Nov 2023 Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu

Overall, our method can create lifelike avatars with dynamic, realistic and generalized appearances.
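
As a rough illustration of the "pose-dependent Gaussian map" idea named in the title above, the sketch below uses a small CNN to turn a posed position map into per-pixel 3D Gaussian parameters. The network size, channel layout and activations are assumptions made for this example, not the architecture from the paper.

```python
# Illustrative sketch only: a pose-conditioned CNN that predicts per-pixel
# 3D Gaussian parameters from a posed position map ("Gaussian map" idea).
# Channel layout and network size are assumptions, not the paper's design.
import torch
import torch.nn as nn

class GaussianMapPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # 3 input channels: posed template positions rendered into a 2D map.
        # 14 output channels per pixel: xyz offset (3), scale (3),
        # rotation quaternion (4), opacity (1), RGB color (3).
        self.net = nn.Sequential(
            nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 14, 3, padding=1),
        )

    def forward(self, position_map):
        out = self.net(position_map)
        offset, scale, rot, opacity, color = torch.split(out, [3, 3, 4, 1, 3], dim=1)
        return {
            "xyz": position_map + offset,                            # pose-dependent centers
            "scale": torch.exp(scale),                               # positive scales
            "rotation": torch.nn.functional.normalize(rot, dim=1),   # unit quaternions
            "opacity": torch.sigmoid(opacity),
            "color": torch.sigmoid(color),
        }

# Usage: one 256x256 position map -> 256*256 Gaussians.
pred = GaussianMapPredictor()
gaussians = pred(torch.randn(1, 3, 256, 256))
print({k: v.shape for k, v in gaussians.items()})
```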

Leveraging Intrinsic Properties for Non-Rigid Garment Alignment

no code implementations ICCV 2023 Siyou Lin, Boyao Zhou, Zerong Zheng, Hongwen Zhang, Yebin Liu

To achieve wrinkle-level as well as texture-level alignment, we present a novel coarse-to-fine two-stage method that leverages intrinsic manifold properties with two neural deformation fields, in the 3D space and the intrinsic space, respectively.

Control4D: Efficient 4D Portrait Editing with Text

no code implementations 31 May 2023 Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu

We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions.

AvatarReX: Real-time Expressive Full-body Avatars

no code implementations 8 May 2023 Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu

We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.

Disentanglement

PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling

1 code implementation 25 Apr 2023 Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu

To this end, we present PoseVocab, a novel pose encoding method that encourages the network to discover the optimal pose embeddings for learning the dynamic human appearance.
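
The sketch below illustrates one way to realize joint-structured pose embeddings in the spirit of the description above: each joint owns a small set of learnable key embeddings, blended according to the similarity between the query joint rotation and learned key rotations. The vocabulary size, similarity measure and softmax temperature are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of joint-structured pose embeddings. Each joint keeps a small
# set of learnable "key" rotations with attached embeddings; a query rotation
# retrieves a blended embedding per joint, and the per-joint embeddings are
# concatenated into one pose embedding.
import torch
import torch.nn as nn

class JointPoseVocab(nn.Module):
    def __init__(self, num_joints=24, keys_per_joint=8, dim=32):
        super().__init__()
        self.key_quats = nn.Parameter(torch.randn(num_joints, keys_per_joint, 4))
        self.key_embeds = nn.Parameter(torch.randn(num_joints, keys_per_joint, dim))

    def forward(self, joint_quats):  # (num_joints, 4) query pose as quaternions
        keys = nn.functional.normalize(self.key_quats, dim=-1)
        query = nn.functional.normalize(joint_quats, dim=-1).unsqueeze(1)  # (J, 1, 4)
        # Similarity between the query rotation and each key rotation of that joint.
        sim = (keys * query).sum(-1).abs()           # (J, K); |dot| handles q ~ -q
        weights = torch.softmax(sim / 0.1, dim=-1)   # (J, K)
        # Blend key embeddings per joint, then concatenate over joints.
        per_joint = (weights.unsqueeze(-1) * self.key_embeds).sum(1)  # (J, dim)
        return per_joint.reshape(-1)

vocab = JointPoseVocab()
pose_embedding = vocab(torch.randn(24, 4))
print(pose_embedding.shape)  # torch.Size([768])
```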

CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition

no code implementations CVPR 2023 Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu

In this way, the clothing deformations are disentangled such that the pose-dependent wrinkles can be better learned and applied to unseen poses.

Tensor4D: Efficient Neural 4D Decomposition for High-Fidelity Dynamic Reconstruction and Rendering

1 code implementation 21 Nov 2022 (also CVPR 2023) Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu

The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor.

Dynamic Reconstruction · Tensor Decomposition
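
To make the idea of a factorized 4D spatio-temporal tensor concrete, here is a toy example of storing a field F(x, y, z, t) as a sum of rank-1 terms instead of a dense 4D grid. This is a generic CP-style decomposition chosen for brevity, not the hierarchical plane-based decomposition actually used in Tensor4D; the rank and resolutions are arbitrary.

```python
# Low-rank 4D tensor factorization: the field is a sum of R rank-1 terms,
# each a product of four 1D factors, one per axis (x, y, z, t).
import numpy as np

R, X, Y, Z, T = 16, 64, 64, 64, 30           # rank and grid resolutions
rng = np.random.default_rng(0)
fx = rng.standard_normal((R, X))              # per-axis 1D factors
fy = rng.standard_normal((R, Y))
fz = rng.standard_normal((R, Z))
ft = rng.standard_normal((R, T))

def query(ix, iy, iz, it):
    """Evaluate the factorized field at integer grid indices."""
    return np.sum(fx[:, ix] * fy[:, iy] * fz[:, iz] * ft[:, it])

dense_size = X * Y * Z * T                    # 7,864,320 values if stored densely
factored_size = R * (X + Y + Z + T)           # 3,552 values in factored form
print(query(10, 20, 30, 5), dense_size, factored_size)
```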

DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras

no code implementations 16 Jul 2022 Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu

At its core is a novel diffusion-based stereo module, which introduces diffusion models, a powerful class of generative models, into the iterative stereo matching network.

3D Human Reconstruction · 4K +2
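
Below is a heavily simplified sketch of what plugging a diffusion model into iterative stereo refinement can look like: a DDPM-style sampler that refines a coarse disparity map while conditioned on stereo features. The placeholder denoiser, the linear noise schedule and the conditioning channels are assumptions for illustration only and do not reflect DiffuStereo's actual network.

```python
# DDPM-style refinement of a disparity map, conditioned on stereo features.
# The denoiser is an untrained stand-in; a real system would train it to
# predict the noise added to ground-truth disparity.
import torch
import torch.nn as nn

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Placeholder denoiser: predicts the noise in the disparity map, conditioned on
# stereo matching features (e.g., a cost-volume slice) stacked as extra channels.
denoiser = nn.Sequential(
    nn.Conv2d(1 + 8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def refine_disparity(coarse_disp, stereo_feat):
    """Reverse diffusion, starting from a noised version of the coarse disparity."""
    x = torch.sqrt(alphas_bar[-1]) * coarse_disp + \
        torch.sqrt(1 - alphas_bar[-1]) * torch.randn_like(coarse_disp)
    for t in reversed(range(T)):
        eps = denoiser(torch.cat([x, stereo_feat], dim=1))
        a, ab = 1.0 - betas[t], alphas_bar[t]
        x = (x - betas[t] / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

refined = refine_disparity(torch.rand(1, 1, 64, 64), torch.randn(1, 8, 64, 64))
print(refined.shape)
```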

Learning Implicit Templates for Point-Based Clothed Human Modeling

1 code implementation 14 Jul 2022 Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu

We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing.

AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture

1 code implementation 5 Jul 2022 Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu

Then, given a monocular RGB video of this subject, our method integrates information from both the image observation and the avatar prior, and accordingly reconstructs high-fidelity 3D textured models with dynamic details regardless of visibility.

ProbNVS: Fast Novel View Synthesis with Learned Probability-Guided Sampling

no code implementations 7 Apr 2022 Yuemei Zhou, Tao Yu, Zerong Zheng, Ying Fu, Yebin Liu

Existing state-of-the-art novel view synthesis methods rely on either fairly accurate 3D geometry estimation or sampling of the entire space for neural volumetric rendering, which limits their overall efficiency.

Novel View Synthesis

Structured Local Radiance Fields for Human Avatar Modeling

no code implementations CVPR 2022 Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu

These local radiance fields not only leverage the flexibility of implicit representation in shape and appearance modeling, but also factorize cloth deformations into skeleton motions, node residual translations and the dynamic detail variations inside each individual radiance field.
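
A minimal sketch of the factorization described above, under several simplifying assumptions (a single toy bone transform, nearest-node lookup, and one shared MLP instead of per-node fields): node positions are driven by a skeleton transform plus learned residual translations, and a query point is expressed in its nearest node's local frame together with a per-node latent code before radiance and density are evaluated.

```python
# Nodes ride on the skeleton, refined by learned residual translations; a query
# point is converted to the nearest node's local coordinates and fed, together
# with that node's latent code, to a small radiance MLP.
import torch
import torch.nn as nn

N = 128                                         # number of structured nodes
rest_nodes = torch.rand(N, 3)                   # node positions on the canonical body
residuals = nn.Parameter(torch.zeros(N, 3))     # learned per-node residual translations
node_codes = nn.Parameter(torch.randn(N, 16))   # per-node latent for dynamic detail
local_mlp = nn.Sequential(nn.Linear(3 + 16, 64), nn.ReLU(), nn.Linear(64, 4))

def pose_nodes(R, t):
    """Apply a (toy, single-bone) skeleton transform plus residual translations."""
    return rest_nodes @ R.T + t + residuals

def query_radiance(x, R, t):
    nodes = pose_nodes(R, t)
    k = torch.cdist(x[None], nodes).argmin()    # nearest node index
    x_local = x - nodes[k]                      # coordinates in that node's frame
    return local_mlp(torch.cat([x_local, node_codes[k]]))  # (r, g, b, density)

print(query_radiance(torch.rand(3), torch.eye(3), torch.zeros(3)))
```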

High-Fidelity Human Avatars From a Single RGB Camera

no code implementations CVPR 2022 Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li

To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures.

Neural Rendering · Vocal Bursts Intensity Prediction

HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars

no code implementations 19 Dec 2021 Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker

To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field.

Neural Rendering

POSEFusion: Pose-guided Selective Fusion for Single-view Human Volumetric Capture

no code implementations CVPR 2021 Zhe Li, Tao Yu, Zerong Zheng, Kaiwen Guo, Yebin Liu

By contributing a novel reconstruction framework that contains pose-guided keyframe selection and robust implicit surface fusion, our method fully utilizes the advantages of both tracking-based and tracking-free inference methods, and enables high-fidelity reconstruction of dynamic surface details even in invisible regions.

3D Reconstruction

Deep Implicit Templates for 3D Shape Representation

1 code implementation CVPR 2021 Zerong Zheng, Tao Yu, Qionghai Dai, Yebin Liu

Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming more and more popular in the 3D vision community due to their compactness and strong representation power.

3D Shape Representation
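
For readers unfamiliar with deep implicit functions, the sketch below shows the bare representation described above: an MLP that maps a 3D point and a per-shape latent code to a signed distance, with the surface recovered as the zero level set. It illustrates the DIF concept only, not the template and deformation factorization the paper builds on top; the network sizes and latent dimension are arbitrary assumptions.

```python
# A deep implicit function: signed distance = MLP(point, shape code).
# The shape surface is the set of points where the predicted distance is zero.
import torch
import torch.nn as nn

class DeepImplicitFunction(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, shape_code):
        # points: (N, 3); shape_code: (latent_dim,) broadcast to every point.
        code = shape_code.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, code], dim=-1)).squeeze(-1)  # signed distances

dif = DeepImplicitFunction()
sdf = dif(torch.rand(1024, 3) * 2 - 1, torch.randn(64))
surface_mask = sdf.abs() < 0.01   # points near the zero level set
print(sdf.shape, surface_mask.sum())
```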

Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping

no code implementations 30 Nov 2020 Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu

We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

1 code implementation 8 Jul 2020 Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai

To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.

3D Human Reconstruction · Camera Calibration
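
The sketch below shows, under stated assumptions, how an implicit occupancy function can be conditioned on both a pixel-aligned image feature and a feature sampled from a voxelized parametric body model, which is the combination PaMIR describes. The feature tensors here are random stand-ins and the sizes are arbitrary; the real method learns these features from the input image and a fitted body model.

```python
# Occupancy at a 3D point = MLP(pixel-aligned image feature, body-prior feature, depth).
import torch
import torch.nn as nn
import torch.nn.functional as F

image_feat = torch.randn(1, 32, 128, 128)    # stand-in 2D feature map from an image encoder
body_voxels = torch.randn(1, 8, 32, 32, 32)  # stand-in voxelized parametric-model feature
occ_mlp = nn.Sequential(nn.Linear(32 + 8 + 1, 128), nn.ReLU(), nn.Linear(128, 1))

def occupancy(points):  # points: (N, 3) in [-1, 1]^3, z treated as depth
    # Pixel-aligned feature: sample the 2D map at each point's (x, y) projection.
    xy = points[:, :2].view(1, -1, 1, 2)
    f_img = F.grid_sample(image_feat, xy, align_corners=True)[0, :, :, 0].t()        # (N, 32)
    # Parametric prior feature: sample the 3D feature volume at (x, y, z).
    xyz = points.view(1, -1, 1, 1, 3)
    f_body = F.grid_sample(body_voxels, xyz, align_corners=True)[0, :, :, 0, 0].t()  # (N, 8)
    z = points[:, 2:3]
    return torch.sigmoid(occ_mlp(torch.cat([f_img, f_body, z], dim=-1)))             # (N, 1)

print(occupancy(torch.rand(4096, 3) * 2 - 1).shape)
```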

Robust 3D Self-portraits in Seconds

no code implementations CVPR 2020 Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

In this paper, we propose an efficient method for robust 3D self-portraits using a single RGBD camera.

DeepHuman: 3D Human Reconstruction from a Single Image

1 code implementation ICCV 2019 Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu

We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image.

3D Human Reconstruction · Pose Estimation +1
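
As a toy illustration of image-guided volume-to-volume translation, the sketch below refines a coarse occupancy volume with a small 3D CNN while broadcasting 2D image features along the depth axis as guidance. All sizes and the guidance scheme are assumptions made for this example; this is not DeepHuman's actual network.

```python
# Volume-to-volume translation: coarse occupancy + broadcast image features -> refined occupancy.
import torch
import torch.nn as nn

class Vol2VolNet(nn.Module):
    def __init__(self, img_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + img_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, coarse_volume, image_feat):
        # image_feat: (B, C, H, W) -> repeated along the depth axis as guidance.
        guide = image_feat.unsqueeze(2).expand(-1, -1, coarse_volume.shape[2], -1, -1)
        x = torch.cat([coarse_volume, guide], dim=1)
        return torch.sigmoid(self.net(x))   # refined occupancy probabilities

net = Vol2VolNet()
refined = net(torch.rand(1, 1, 64, 64, 64), torch.randn(1, 16, 64, 64))
print(refined.shape)  # torch.Size([1, 1, 64, 64, 64])
```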

SimulCap: Single-View Human Performance Capture with Cloth Simulation

no code implementations CVPR 2019 Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-Moll, Yebin Liu

This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera.

HybridFusion: Real-Time Performance Capture Using a Single Depth Sensor and Sparse IMUs

no code implementations ECCV 2018 Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu

We propose a light-weight and highly robust real-time human performance capture method based on a single depth camera and sparse inertial measurement units (IMUs).

Surface Reconstruction
