Search Results for author: Kehong Gong

Found 5 papers, 4 papers with code

MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation

1 code implementation 20 Jan 2024 Nhat M. Hoang, Kehong Gong, Chuan Guo, Michael Bi Mi

Specifically, we separate the denoising objectives of a diffusion model into two stages: obtaining conditional rough motion approximations in the initial $T-T^*$ steps by learning the noisy annotated motions, followed by the unconditional refinement of these preliminary motions during the last $T^*$ steps using unannotated motions.

Denoising
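
The two-stage schedule described in this abstract can be illustrated with a short sampling loop. The sketch below is hypothetical and not taken from the MotionMix code release: it assumes a noise-prediction model with the interface `model(x, t, cond)` and uses a plain DDPM update, conditioning on the prompt for the first $T-T^*$ steps and dropping the condition for the final $T^*$ refinement steps.

```python
import torch

def two_stage_sample(model, cond, T=1000, T_star=100, shape=(1, 196, 263)):
    """Hypothetical sketch of two-stage sampling: conditional rough
    approximation for the first T - T_star steps, then unconditional
    refinement for the last T_star steps."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                    # start from Gaussian noise at step T
    for t in reversed(range(T)):
        c = cond if t >= T_star else None     # drop the condition for the final T_star steps
        eps = model(x, t, c)                  # predicted noise (assumed model interface)
        # Plain DDPM posterior-mean update with the predicted noise.
        x = (x - betas[t] / torch.sqrt(1.0 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```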

Priority-Centric Human Motion Generation in Discrete Latent Space

no code implementations ICCV 2023 Hanyang Kong, Kehong Gong, Dongze Lian, Michael Bi Mi, Xinchao Wang

We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token within the entire motion sequence.
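
As a purely illustrative sketch of what a significance-determined noise schedule for discrete tokens could look like (this is a placeholder, not the paper's formulation), the snippet below masks the least significant tokens first as the forward corruption process advances, so the most informative tokens survive longest. The significance scores and the linear masking ratio are assumptions.

```python
import torch

def priority_corrupt(tokens, significance, t, T, mask_id):
    """Illustrative forward-corruption step for discrete motion tokens.
    tokens: (L,) int64 token ids; significance: (L,) scores (higher = more important);
    t: current diffusion step in [0, T].  The fraction of masked tokens grows
    linearly with t, and low-significance tokens are masked first."""
    n_corrupt = int(round(t / T * tokens.numel()))
    order = torch.argsort(significance)        # least significant first
    corrupted = tokens.clone()
    corrupted[order[:n_corrupt]] = mask_id
    return corrupted

# Example: a 6-token sequence where tokens 2 and 4 are most significant.
tokens = torch.tensor([11, 52, 7, 33, 90, 14])
significance = torch.tensor([0.1, 0.3, 0.9, 0.2, 0.8, 0.4])
print(priority_corrupt(tokens, significance, t=3, T=6, mask_id=-1))
```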

PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision

1 code implementation CVPR 2022 Kehong Gong, Bingbing Li, Jianfeng Zhang, Tao Wang, Jing Huang, Michael Bi Mi, Jiashi Feng, Xinchao Wang

Existing self-supervised 3D human pose estimation schemes have largely relied on weak supervision, such as consistency losses, to guide learning, which inevitably leads to inferior results in real-world scenarios with unseen poses.

3D Human Pose Estimation, Hallucination

PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose Estimation

1 code implementation CVPR 2021 Kehong Gong, Jianfeng Zhang, Jiashi Feng

To address this problem, we present PoseAug, a new auto-augmentation framework that learns to augment the available training poses toward greater diversity and thus improves the generalization of the trained 2D-to-3D pose estimator.

 Ranked #1 on Monocular 3D Human Pose Estimation on Human3.6M (Use Video Sequence metric)

Data Augmentation, Monocular 3D Human Pose Estimation, +1
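
Since PoseAug's core idea is to synthesize new 2D-3D training pairs by perturbing existing 3D poses, a simple non-learned illustration is sketched below: random per-bone length scaling along the kinematic tree plus a random global rotation, followed by a pinhole projection to obtain the paired 2D pose. The skeleton tree, camera parameters, and perturbation ranges are all assumptions for illustration; the actual framework learns its augmentations rather than sampling them at random.

```python
import torch

# Assumed Human3.6M-style 17-joint kinematic tree (parent index per joint).
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

def augment_pose(pose3d, length_jitter=0.1, max_rot_deg=30.0):
    """pose3d: (17, 3) joint positions rooted at joint 0.
    Applies random per-bone length scaling along the kinematic tree and a
    random global rotation about the vertical axis (illustrative only)."""
    scales = 1.0 + length_jitter * (2 * torch.rand(len(PARENTS)) - 1)
    new_pose = pose3d.clone()
    for j in range(1, len(PARENTS)):
        bone = pose3d[j] - pose3d[PARENTS[j]]
        new_pose[j] = new_pose[PARENTS[j]] + scales[j] * bone
    theta = torch.deg2rad(max_rot_deg * (2 * torch.rand(()) - 1))
    c, s = torch.cos(theta), torch.sin(theta)
    rot = torch.stack([torch.stack([c, torch.zeros(()), s]),
                       torch.tensor([0.0, 1.0, 0.0]),
                       torch.stack([-s, torch.zeros(()), c])])
    return new_pose @ rot.T

def project(pose3d, f=1000.0, cx=500.0, cy=500.0, z_offset=5.0):
    """Pinhole projection to produce the paired 2D pose (assumed camera)."""
    z = pose3d[:, 2] + z_offset
    return torch.stack([f * pose3d[:, 0] / z + cx,
                        f * pose3d[:, 1] / z + cy], dim=-1)
```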
