HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network

Hands are often severely occluded by objects, which makes 3D hand mesh estimation challenging. Previous works have often disregarded information in occluded regions. However, we argue that occluded regions are strongly correlated with hands, so they can provide highly beneficial information for complete 3D hand mesh estimation. Thus, in this work, we propose a novel 3D hand mesh estimation network, HandOccNet, that fully exploits the information in occluded regions as a secondary means to enhance image features and make them much richer. To this end, we design two successive Transformer-based modules, called the feature injecting transformer (FIT) and the self-enhancing transformer (SET). FIT injects hand information into occluded regions by considering their correlation. SET refines the output of FIT using a self-attention mechanism. By injecting hand information into occluded regions, our HandOccNet achieves state-of-the-art performance on 3D hand mesh benchmarks that contain challenging hand-object occlusions. The code is available at: https://github.com/namepllet/HandOccNet.
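The two modules can be sketched as attention operations: FIT performs cross-attention that injects hand feature information into occluded-region tokens based on their correlation, and SET applies self-attention to refine the result. A minimal NumPy sketch follows; the single-head attention, residual connections, token counts, and feature dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention (single head).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def fit(occluded_tokens, hand_tokens):
    # FIT (sketch): occluded-region tokens query the hand tokens,
    # so correlated hand information is injected into occluded regions.
    return occluded_tokens + attention(occluded_tokens, hand_tokens, hand_tokens)

def set_refine(tokens):
    # SET (sketch): self-attention over the FIT output refines the features.
    return tokens + attention(tokens, tokens, tokens)

rng = np.random.default_rng(0)
occ = rng.normal(size=(49, 32))   # occluded-region feature tokens (assumed 7x7 grid, dim 32)
hand = rng.normal(size=(49, 32))  # hand feature tokens (assumed same shape)
enhanced = set_refine(fit(occ, hand))
print(enhanced.shape)  # → (49, 32)
```

The enhanced feature map keeps the spatial layout of the input tokens, which lets it feed into a standard mesh-regression head afterwards.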

CVPR 2022

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Hand Pose Estimation | DexYCB | HandOccNet | Average MPJPE (mm) | 14.0 | #5 |
| 3D Hand Pose Estimation | DexYCB | HandOccNet | Procrustes-Aligned MPJPE | 5.80 | #6 |
| 3D Hand Pose Estimation | DexYCB | HandOccNet | MPVPE | 13.1 | #6 |
| 3D Hand Pose Estimation | DexYCB | HandOccNet | VAUC | 76.6 | #3 |
| 3D Hand Pose Estimation | DexYCB | HandOccNet | PA-MPVPE | 5.5 | #4 |
| 3D Hand Pose Estimation | DexYCB | HandOccNet | PA-VAUC | 89.0 | #4 |
| 3D Hand Pose Estimation | HO-3D | HandOccNet | Average MPJPE (mm) | 24.9 | #4 |
| 3D Hand Pose Estimation | HO-3D | HandOccNet | ST-MPJPE (mm) | 24.0 | #7 |
| 3D Hand Pose Estimation | HO-3D | HandOccNet | PA-MPJPE (mm) | 9.1 | #5 |
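Several of the metrics above carry a "PA-" (Procrustes-aligned) prefix: the predicted joints are first aligned to the ground truth with the optimal similarity transform (scale, rotation, translation), so the error measures pose shape rather than global placement. A minimal NumPy sketch of PA-MPJPE; the function name and the 21-joint layout are illustrative assumptions:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    # Procrustes-aligned MPJPE: find the similarity transform that best
    # maps pred onto gt, then compute the mean per-joint Euclidean error.
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the 3x3 covariance matrix.
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()

rng = np.random.default_rng(0)
gt = rng.normal(size=(21, 3))                 # 21 hand joints (assumed layout)
pred = 2.0 * gt + np.array([0.1, 0.2, 0.3])   # scaled + shifted copy of gt
print(pa_mpjpe(pred, gt) < 1e-9)  # → True: alignment removes scale and translation
```

This is why PA-MPJPE (9.1 mm on HO-3D) is much lower than the unaligned MPJPE (24.9 mm): global misalignment accounts for most of the raw error.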
