MuTr: Multi-Stage Transformer for Hand Pose Estimation from Full-Scene Depth Image

This work presents DePOTR, a novel transformer-based method for hand pose estimation. We evaluate DePOTR on four benchmark datasets, where it outperforms other transformer-based methods and achieves results on par with the state of the art. To further demonstrate the strength of DePOTR, we propose MuTr, a novel multi-stage approach that estimates hand pose directly from the full-scene depth image. MuTr removes the need for two different models in the hand pose estimation pipeline (one for hand localization and one for pose estimation) while maintaining promising results. To the best of our knowledge, this is the first successful attempt to use the same model architecture in both the standard cropped-hand setup and the full-scene image setup while achieving competitive results in both. On the NYU dataset, DePOTR and MuTr reach an average 3D error of 7.85 mm and 8.71 mm, respectively.
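The page does not reproduce any architectural details, so the sketch below only illustrates the single-model, multi-stage idea described in the abstract: the same pose estimator is run once on the full-scene depth image to localize the hand, then again on a crop around that estimate. Everything here (`PoseNet`, `crop_around`, the toy CNN encoder standing in for the actual transformer, and the `(u, v, z)` output convention) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Stand-in pose estimator mapping a depth image to J joints as (u, v, z).

    The real DePOTR model is a transformer; a tiny CNN is used here only to
    keep the sketch self-contained and runnable.
    """
    def __init__(self, num_joints: int = 14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W) -> joints: (B, J, 3)
        return self.head(self.encoder(depth)).view(-1, self.num_joints, 3)

def crop_around(depth: torch.Tensor, center_uv: torch.Tensor, size: int = 128):
    """Crop a square window around an estimated hand center (pixel coords)."""
    _, _, h, w = depth.shape
    u = int(center_uv[0].clamp(size // 2, w - size // 2))
    v = int(center_uv[1].clamp(size // 2, h - size // 2))
    return depth[:, :, v - size // 2 : v + size // 2,
                 u - size // 2 : u + size // 2]

def multi_stage_estimate(model: nn.Module, full_depth: torch.Tensor):
    # Stage 1: coarse pass over the full scene to localize the hand.
    coarse = model(full_depth)                # (1, J, 3)
    center_uv = coarse[0, :, :2].mean(dim=0)  # rough hand center in pixels
    # Stage 2: the same model refines the pose on a crop around the hand.
    crop = crop_around(full_depth, center_uv)
    return model(crop)

model = PoseNet(num_joints=14)
scene = torch.rand(1, 1, 480, 640)           # stand-in full-scene depth frame
joints = multi_stage_estimate(model, scene)  # (1, 14, 3)
```

The point of the two-pass loop is that no separate detector network is needed: one set of weights serves both the localization and the refinement pass, which is the property the abstract highlights.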


Datasets

ICVL Hands, NYU Hands

Results from the Paper


Task                 | Dataset    | Model                   | Metric Name      | Metric Value | Global Rank
Hand Pose Estimation | ICVL Hands | DePOTR                  | Average 3D Error | 5.98 mm      | #4
Hand Pose Estimation | NYU Hands  | MuTr (full-scene image) | Average 3D Error | 8.71 mm      | #10
Hand Pose Estimation | NYU Hands  | DePOTR                  | Average 3D Error | 7.85 mm      | #4
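For reference, Average 3D Error on these benchmarks conventionally denotes the mean Euclidean distance, in millimeters, between predicted and ground-truth joint positions, averaged over all joints and test frames. A minimal, illustrative computation follows; the function name and array shapes are assumptions, not from the paper.

```python
import numpy as np

def average_3d_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint Euclidean distance in mm.

    pred, gt: arrays of shape (frames, joints, 3), in millimeters.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy check: predictions offset by 5 mm along x give an error of 5 mm.
gt = np.zeros((100, 14, 3))
pred = gt + np.array([5.0, 0.0, 0.0])
assert np.isclose(average_3d_error(pred, gt), 5.0)
```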
