A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image

For the task of 3D hand and body pose estimation from a single depth image, a novel anchor-based approach termed Anchor-to-Joint regression network (A2J), with end-to-end learning ability, is proposed. Within A2J, anchor points able to capture global-local spatial context information are densely set on the depth image as local regressors for the joints. They contribute to predicting the positions of the joints in an ensemble way, which enhances generalization ability. The proposed 3D articulated pose estimation paradigm differs from the state-of-the-art encoder-decoder based FCN, 3D CNN, and point-set based manners. To discover the anchor points most informative for a certain joint, an anchor proposal procedure is also introduced within A2J. Meanwhile, a 2D CNN (i.e., ResNet-50) is used as the backbone network to drive A2J, avoiding time-consuming 3D convolutional or deconvolutional layers. Experiments on 3 hand datasets and 2 body datasets verify A2J's superiority. Moreover, A2J runs at high speed, around 100 FPS on a single NVIDIA 1080Ti GPU.
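The anchor-to-joint voting step described above can be illustrated with a short sketch. The snippet below is a minimal reconstruction of the idea under stated assumptions, not the authors' released code: it assumes anchors lie on a regular stride-4 grid over a cropped depth image, and that the backbone has already produced per-anchor 2D offsets, depth estimates, and informativeness (anchor proposal) responses for each joint. All function and variable names are illustrative.

```python
# Minimal sketch of A2J-style anchor-to-joint aggregation (a reconstruction,
# not the authors' code). Anchors are assumed to sit on a stride-4 grid.
import torch
import torch.nn.functional as F

def a2j_aggregate(anchor_xy, offsets, depths, responses):
    """Fuse dense per-anchor predictions into joint estimates.

    anchor_xy: (A, 2)    fixed in-plane anchor coordinates
    offsets:   (A, J, 2) predicted 2D offset from each anchor to each joint
    depths:    (A, J)    predicted depth of each joint from each anchor
    responses: (A, J)    informativeness score of each anchor per joint
    returns:   (J, 3)    estimated (x, y, z) for each joint
    """
    weights = F.softmax(responses, dim=0)               # normalize over anchors
    xy = anchor_xy.unsqueeze(1) + offsets               # (A, J, 2) per-anchor votes
    joint_xy = (weights.unsqueeze(-1) * xy).sum(dim=0)  # weighted ensemble, (J, 2)
    joint_z = (weights * depths).sum(dim=0)             # weighted depth, (J,)
    return torch.cat([joint_xy, joint_z.unsqueeze(-1)], dim=-1)

# Toy usage: 44x44 anchor grid (stride 4 on a 176x176 crop), 14 joints.
A, J = 44 * 44, 14
ys, xs = torch.meshgrid(torch.arange(44), torch.arange(44), indexing="ij")
anchor_xy = torch.stack([xs, ys], dim=-1).reshape(A, 2).float() * 4
joints = a2j_aggregate(anchor_xy, torch.randn(A, J, 2),
                       torch.randn(A, J), torch.randn(A, J))
print(joints.shape)  # torch.Size([14, 3])
```

Averaging the votes of many softly weighted anchors, rather than relying on a single regressor, is what lets the ensemble capture global-local context and improve generalization, as the abstract describes.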

ICCV 2019

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Hand Pose Estimation | HANDS 2017 | A2J | Average 3D Error | 8.57 | #3 |
| Hand Pose Estimation | ICVL Hands | A2J | Average 3D Error | 6.461 | #9 |
| Hand Pose Estimation | ICVL Hands | A2J | FPS | 105.06 | #1 |
| Pose Estimation | ITOP front-view | A2J | Mean mAP | 88.0 | #4 |
| Pose Estimation | K2HPD | A2J | PDJ@5mm | 76.3 | #1 |
| Pose Estimation | K2HPD | A2J | FPS | 93.78 | #1 |
| Hand Pose Estimation | NYU Hands | A2J | Average 3D Error | 8.61 | #9 |
| Hand Pose Estimation | NYU Hands | A2J | FPS | 105.06 | #1 |
