Search Results for author: Jiashun Wang

Found 9 papers, 5 papers with code

Zero-shot Pose Transfer for Unrigged Stylized 3D Characters

1 code implementation CVPR 2023 Jiashun Wang, Xueting Li, Sifei Liu, Shalini De Mello, Orazio Gallo, Xiaolong Wang, Jan Kautz

We present a zero-shot approach that requires only widely available deformed non-stylized avatars during training, yet can deform stylized characters of significantly different shapes at inference.

Pose Transfer

ContactArt: Learning 3D Interaction Priors for Category-level Articulated Object and Hand Poses Estimation

no code implementations 2 May 2023 Zehao Zhu, Jiashun Wang, Yuzhe Qin, Deqing Sun, Varun Jampani, Xiaolong Wang

We propose a new dataset and a novel approach to learning hand-object interaction priors for hand and articulated object pose estimation.

Hand Pose Estimation · Object

USEEK: Unsupervised SE(3)-Equivariant 3D Keypoints for Generalizable Manipulation

no code implementations 28 Sep 2022 Zhengrong Xue, Zhecheng Yuan, Jiashun Wang, Xueqian Wang, Yang Gao, Huazhe Xu

Can a robot manipulate intra-category unseen objects in arbitrary poses with the help of a mere demonstration of grasping pose on a single object instance?

Keypoint Detection · Object

Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations

1 code implementation 11 Jul 2022 Jianglong Ye, Jiashun Wang, Binghao Huang, Yuzhe Qin, Xiaolong Wang

We first convert large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF.

Human-Object Interaction Detection · motion retargeting

Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

no code implementations 5 Apr 2022 Yueh-Hua Wu, Jiashun Wang, Xiaolong Wang

In this paper, we propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category, which are generated from a human grasp affordance model.

Imitation Learning · Representation Learning

Multi-Person 3D Motion Prediction with Multi-Range Transformers

1 code implementation NeurIPS 2021 Jiashun Wang, Huazhe Xu, Medhini Narasimhan, Xiaolong Wang

Thus, instead of predicting each human pose trajectory in isolation, we introduce a Multi-Range Transformers model, which contains a local-range encoder for individual motion and a global-range encoder for social interactions.

motion prediction · Multi-Person Pose forecasting +1
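The local-range / global-range split described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `self_attention` is single-head attention with identity projections, and the function names, shapes, and additive fusion are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (seq_len, d). Toy single-head attention where queries,
    # keys, and values are the inputs themselves (no learned weights).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def multi_range_encode(motions):
    # motions: (n_persons, n_frames, d) pose features.
    # Local range: each person attends only over their own frames.
    local = np.stack([self_attention(person) for person in motions])
    # Global range: all persons' frames attend to each other jointly,
    # modeling social interaction across people.
    n, t, d = local.shape
    glob = self_attention(local.reshape(n * t, d)).reshape(n, t, d)
    # Fuse individual-motion and interaction features (additive fusion
    # here is an illustrative choice).
    return local + glob
```

In the actual model, both encoders are full transformer stacks with learned projections, and a decoder predicts future poses from the fused features; the sketch only shows how the two attention ranges differ in what they attend over.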

Hand-Object Contact Consistency Reasoning for Human Grasps Generation

no code implementations ICCV 2021 Hanwen Jiang, Shaowei Liu, Jiashun Wang, Xiaolong Wang

Based on the hand-object contact consistency, we design novel objectives in training the human grasp generation model and also a new self-supervised task which allows the grasp generation network to be adjusted even during test time.

Grasp Generation · Object +1

Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes

1 code implementation CVPR 2021 Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, Xiaolong Wang

Synthesizing 3D human motion plays an important role in many graphics applications as well as understanding human activity.

Motion Synthesis
