Adapted Human Pose: Monocular 3D Human Pose Estimation with Zero Real 3D Pose Data

23 May 2021 · Shuangjun Liu, Naveen Sehgal, Sarah Ostadabbas

The ultimate goal for an inference model is to be robust and functional in real-life applications. However, domain gaps between training and test data often degrade model performance. This issue is especially critical for monocular 3D human pose estimation, where 3D human data is typically collected in a controlled lab setting. In this paper, we focus on alleviating the negative effect of domain shift in both the appearance and the pose space for 3D human pose estimation by presenting our adapted human pose (AHuP) approach. AHuP is built upon two key components: (1) semantically aware adaptation (SAA) for cross-domain feature space adaptation, and (2) skeletal pose adaptation (SPA) for pose space adaptation, which takes only limited information from the target domain. Using zero real 3D human pose data, one of our adapted synthetic models shows performance comparable with SOTA pose estimation models trained on large-scale real 3D human datasets. The proposed SPA can also be employed independently as a lightweight head to improve existing SOTA models in a novel context. A new 3D scan-based synthetic human dataset called ScanAva+ will also be publicly released with this work.


Results from the Paper


Ranked #116 on 3D Human Pose Estimation on Human3.6M (PA-MPJPE metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | Human3.6M | Adapted Human Pose | PA-MPJPE | 85.1 | #116 |
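The benchmark above reports PA-MPJPE (Procrustes-aligned mean per-joint position error), which rigidly aligns the predicted skeleton to the ground truth with an optimal similarity transform before measuring error. A minimal sketch of this metric using standard orthogonal Procrustes alignment (an illustrative implementation, not the authors' evaluation code):

```python
import numpy as np

def pa_mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """PA-MPJPE: align pred (J, 3) to gt (J, 3) with the optimal
    similarity transform (rotation, uniform scale, translation),
    then return the mean per-joint Euclidean distance."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g          # center both skeletons
    U, S, Vt = np.linalg.svd(p.T @ g)      # orthogonal Procrustes
    R = U @ Vt                             # optimal rotation
    if np.linalg.det(R) < 0:               # avoid a reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (p ** 2).sum()       # optimal uniform scale
    aligned = scale * p @ R + mu_g
    return float(np.linalg.norm(aligned - gt, axis=1).mean())
```

By convention on Human3.6M the error is averaged over all test frames and reported in millimeters, so the 85.1 above corresponds to an average per-joint error of 85.1 mm after alignment.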
