Interacting Two-Hand 3D Pose and Shape Reconstruction From Single Color Image

In this paper, we propose a novel deep learning framework to reconstruct the 3D poses and shapes of two interacting hands from a single color image. Previous methods designed for a single hand cannot be easily applied to the two-hand scenario because of the heavy inter-hand occlusion and the larger solution space. To address the occlusion and the similar appearance between hands that may confuse the network, we design a hand pose-aware attention module that extracts features associated with each individual hand. We then leverage the two-hand context present in interaction and propose a context-aware cascaded refinement that improves the pose and shape accuracy of each hand conditioned on the context between the interacting hands. Extensive experiments on the main benchmark datasets demonstrate that our method predicts accurate 3D hand pose and shape from a single color image and achieves state-of-the-art performance. Code is available on the project webpage: https://baowenz.github.io/Intershape/.
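The abstract describes extracting per-hand features via a pose-aware attention module before refinement. The paper's actual architecture is not detailed here, so the following is only a minimal illustrative sketch of the idea: per-hand joint heatmaps are turned into a soft spatial attention mask that gates a shared image feature map, yielding separate feature maps for the left and right hand. All function and variable names (`pose_aware_attention`, heatmap shapes) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pose_aware_attention(features, hand_heatmaps):
    """Gate shared image features with a soft mask derived from one hand's
    joint heatmaps (illustrative sketch, not the paper's exact module).

    features:      (C, H, W) shared backbone feature map
    hand_heatmaps: (J, H, W) per-joint 2D heatmaps for a single hand
    returns:       (C, H, W) features attended to that hand's region
    """
    # Collapse joint heatmaps into one soft spatial mask for the hand.
    attn = hand_heatmaps.max(axis=0)
    # Normalize to [0, 1] to act as an attention weight per pixel.
    attn = attn / (attn.max() + 1e-8)
    # Broadcast the mask across feature channels.
    return features * attn[None, :, :]

# Toy demo: left hand occupies the top half of the image, right hand the bottom.
C, H, W, J = 4, 8, 8, 21
feats = np.ones((C, H, W))
left_hm = np.zeros((J, H, W));  left_hm[:, :4, :] = 1.0
right_hm = np.zeros((J, H, W)); right_hm[:, 4:, :] = 1.0

left_feats = pose_aware_attention(feats, left_hm)
right_feats = pose_aware_attention(feats, right_hm)
```

In this toy setup the left-hand features are nonzero only in the top half and the right-hand features only in the bottom half, mimicking how attention disentangles the two hands' similar appearance before the cascaded refinement stage.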


Datasets

InterHand2.6M
Task: 3D Interacting Hand Pose Estimation
Dataset: InterHand2.6M
Model: InterShape

Metric        Value   Global Rank
MPJPE (Test)  13.48   # 6
MRRPE (Test)  -       # 6
MPVPE (Test)  13.95   # 5

Methods


No methods listed for this paper.