Search Results for author: Zipeng Ye

Found 6 papers, 2 papers with code

Dynamic Neural Textures: Generating Talking-Face Videos with Continuously Controllable Expressions

no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu

In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real time.

Video Generation

3D-CariGAN: An End-to-End Solution to 3D Caricature Generation from Face Photos

1 code implementation • 15 Mar 2020 • Zipeng Ye, Mengfei Xia, Yanan Sun, Ran Yi, MinJing Yu, Juyong Zhang, Yu-Kun Lai, Yong-Jin Liu

The most challenging issue for our system is that the source domain of face photos (characterized by normal 2D faces) is significantly different from the target domain of 3D caricatures (characterized by 3D exaggerated face shapes and textures).

Caricature

Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose

1 code implementation • 24 Feb 2020 • Ran Yi, Zipeng Ye, Juyong Zhang, Hujun Bao, Yong-Jin Liu

In this paper, we address this problem by proposing a deep neural network model that takes an audio signal A of a source person and a very short video V of a target person as input, and outputs a synthesized high-quality talking face video with personalized head pose (making use of the visual information in V), expression, and lip synchronization (by considering both A and V). (See the interface sketch after this entry.)

3D Face Animation • Video Generation
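
To make the described input/output concrete, here is a minimal, hypothetical interface sketch: audio features A of a source person and a short reference clip V of a target person go in, a synthesized frame sequence comes out. The function name, feature shapes, and placeholder rendering are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical interface sketch (not the paper's released code): the model
# maps an audio clip A of a source person plus a short video V of a target
# person to a synthesized talking-face video. All shapes are illustrative.
import numpy as np

def synthesize_talking_face(audio_a: np.ndarray, video_v: np.ndarray) -> np.ndarray:
    """audio_a: (T_audio, n_mfcc) audio features of the source person.
    video_v: (T_video, H, W, 3) short reference clip of the target person.
    Returns a (T_out, H, W, 3) synthesized video (placeholder output here).
    """
    t_out = audio_a.shape[0] // 4        # assume 4 audio frames per video frame
    h, w = video_v.shape[1:3]
    # A real model would predict head pose from V, and expression and lip
    # motion from both A and V, then render frames; we just tile one frame.
    return np.repeat(video_v[:1], t_out, axis=0)

audio_a = np.random.randn(400, 13)       # ~100 output frames of MFCC features
video_v = np.zeros((30, 256, 256, 3))    # 1-second reference clip
out = synthesize_talking_face(audio_a, video_v)
print(out.shape)                         # (100, 256, 256, 3)
```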

A Configuration-Space Decomposition Scheme for Learning-based Collision Checking

no code implementations • 17 Nov 2019 • Yiheng Han, Wang Zhao, Jia Pan, Zipeng Ye, Ran Yi, Yong-Jin Liu

Motion planning for robots with high degrees of freedom (DOFs) is an important problem in robotics, and sampling-based methods in the configuration space C are one popular solution. (See the collision-checking sketch after this entry.)

BIG-bench Machine Learning • Motion Planning +1
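
As a rough illustration of learning-based collision checking in a configuration space C (a generic surrogate-classifier setup, not the paper's decomposition scheme), one can label sampled configurations with an exact checker and fit a classifier that then answers collision queries quickly. The `exact_collision_check` helper and its obstacle geometry below are hypothetical.

```python
# Generic learning-based collision-checker sketch: label sampled
# configurations with an exact (expensive) checker, then fit a classifier
# that serves as a fast approximate collision predictor.
import numpy as np
from sklearn.svm import SVC

def exact_collision_check(q: np.ndarray) -> bool:
    """Hypothetical ground-truth checker for a 2-DOF robot: treat the disk
    of radius 0.5 around the origin of C-space as the obstacle region."""
    return bool(np.linalg.norm(q) < 0.5)

rng = np.random.default_rng(0)
Q = rng.uniform(-1.0, 1.0, size=(2000, 2))           # samples in C = [-1, 1]^2
y = np.array([exact_collision_check(q) for q in Q])  # expensive labels

clf = SVC(kernel="rbf", gamma="scale").fit(Q, y)     # cheap surrogate checker

q_new = np.array([[0.1, 0.2], [0.9, 0.9]])
print(clf.predict(q_new))                            # expected: [True, False]
```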

Fast Computation of Content-Sensitive Superpixels and Supervoxels Using Q-Distances

no code implementations • ICCV 2019 • Zipeng Ye, Ran Yi, Minjing Yu, Yong-Jin Liu, Ying He

Our key idea is that for manifold regions in which q-distances differ from geodesic distances, GCVT is prone to placing more generators in them, and therefore after a few iterations the q-distance-induced tessellation is an exact GCVT. (See the CVT iteration sketch after this entry.)

Superpixels
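
For background, the sketch below runs a plain Lloyd-style CVT iteration with Euclidean distances in the unit square. It is a simplified stand-in: the paper's method replaces these distances with q-distances on a manifold to obtain a GCVT, which this generic loop does not implement.

```python
# Generic Lloyd/CVT iteration in the Euclidean plane: assign points to the
# nearest generator, move each generator to its cluster centroid, repeat.
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 1, size=(5000, 2))   # dense samples of the domain
gens = rng.uniform(0, 1, size=(16, 2))       # 16 initial generators

for _ in range(20):                          # a few iterations usually suffice
    d = np.linalg.norm(points[:, None] - gens[None], axis=2)
    labels = d.argmin(axis=1)                # Voronoi assignment
    for k in range(len(gens)):
        members = points[labels == k]
        if len(members):
            gens[k] = members.mean(axis=0)   # centroid update

print(np.round(gens, 2))                     # generators of an approximate CVT
```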
