This task addresses absolute (not root-relative) 3D human pose estimation. No ground-truth information is used at test time, including the human bounding box and the human root joint coordinate. Models are trained on subjects 1, 5, 6, 7, and 8 and tested on subjects 9 and 11, without rigid alignment.
(Image credit: RootNet)
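The protocol above (absolute coordinates, no root alignment, no Procrustes) can be sketched as an evaluation function. This is a minimal illustration under assumed conventions (millimeter units, arrays of shape `(N, J, 3)`); the function and variable names are hypothetical, not from any specific codebase.

```python
import numpy as np

def abs_mpjpe_mm(pred, gt):
    """Mean per-joint position error in absolute camera space, in mm.

    No root-relative normalization and no rigid (Procrustes) alignment
    is applied, matching the protocol described in the text.
    pred, gt: (N, J, 3) arrays of absolute 3D joint coordinates.
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Human3.6M split stated in the text: train on S1, S5, S6, S7, S8;
# evaluate on S9 and S11.
TRAIN_SUBJECTS = [1, 5, 6, 7, 8]
TEST_SUBJECTS = [9, 11]
```

Because the error is measured in absolute camera coordinates, a constant offset of the whole skeleton (e.g. a wrong root depth) directly inflates this metric, which is what distinguishes this task from root-relative evaluation.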
Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation.
Ranked #7 on Semantic Segmentation on NYU Depth v2
Although significant improvement has been achieved in 3D human pose estimation, most previous methods only consider the single-person case.
Ranked #1 on 3D Multi-Person Pose Estimation (absolute) on MuPoTS-3D (using extra training data)
Although significant improvement has been achieved recently in 3D human pose estimation, most previous methods only address the single-person case.
Ranked #1 on 3D Absolute Human Pose Estimation on Human3.6M
Then we lift the multi-view 2D poses to 3D space with an Orientation Regularized Pictorial Structure Model (ORPSM), which jointly minimizes the projection error between the 3D and 2D poses and the discrepancy between the 3D pose and the IMU orientations.
Ranked #1 on 3D Human Pose Estimation on Total Capture
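The ORPSM objective described above (projection error plus an orientation term against IMU measurements) can be sketched as a cost function. This is a hedged illustration, not the paper's exact formulation: the function names, the unit-vector cosine penalty for the orientation term, and the weighting `w_ori` are all assumptions.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of 3D points X (J, 3) using a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

def orpsm_cost(X3d, poses2d, cams, limbs, limb_dirs_imu, w_ori=1.0):
    """Sketch of an ORPSM-style objective: reprojection error of the 3D
    pose against each view's 2D pose, plus a discrepancy term between 3D
    limb directions and IMU-measured directions (1 - cosine similarity).
    """
    proj_err = sum(np.sum((project(P, X3d) - p2d) ** 2)
                   for P, p2d in zip(cams, poses2d))
    ori_err = 0.0
    for (i, j), d_imu in zip(limbs, limb_dirs_imu):
        d = X3d[j] - X3d[i]
        d = d / (np.linalg.norm(d) + 1e-8)  # unit limb direction
        ori_err += 1.0 - float(d @ d_imu)
    return proj_err + w_ori * ori_err
```

A pose that reprojects exactly onto every view's 2D estimate and whose limb directions match the IMU readings has zero cost; the orientation term regularizes depth ambiguities that projection error alone cannot resolve.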