FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation From a Single Image

This paper proposes a method for head pose estimation from a single image. Previous methods often predict head pose through landmark or depth estimation and therefore require more computation than necessary. Our method is based on regression with feature aggregation. To keep the model compact, we adopt a soft stagewise regression scheme. Existing feature aggregation methods treat inputs as a bag of features and thus ignore their spatial relationships within the feature map. We propose learning a fine-grained structure mapping that spatially groups features before aggregation; the fine-grained structure provides part-based information along with pooled values. By using either learnable or non-learnable importance over spatial locations, different model variants can be generated that together form a complementary ensemble. Experiments show that our method outperforms state-of-the-art methods, including both landmark-free approaches and those based on landmark or depth estimation. With only a single RGB frame as input, our method even outperforms methods that use multi-modal information (RGB-D, RGB-Time) on yaw estimation. Furthermore, the memory overhead of our model is 100 times smaller than that of previous methods.
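The abstract names two ideas: scoring spatial locations so that features can be softly grouped before aggregation, and summing stage-wise soft classifications into a single regressed angle. The minimal sketch below illustrates both under stated assumptions: it uses PyTorch, and the class name `SpatialGroupingSketch`, layer sizes, group and bin counts, and the 180-degree angle range are hypothetical placeholders for illustration, not the paper's actual FSA-Net architecture.

```python
import torch
import torch.nn as nn


class SpatialGroupingSketch(nn.Module):
    """Toy model: score spatial locations of a feature map, softly group
    them into a few regions, aggregate per-group features, then regress
    one angle with a soft stagewise scheme (coarse-to-fine bins)."""

    def __init__(self, in_channels=64, num_groups=8, num_bins=3, num_stages=3):
        super().__init__()
        # Learnable importance over spatial locations: a 1x1 conv produces one
        # score map per group (a non-learnable variant could instead score
        # locations with, e.g., per-location feature variance).
        self.score = nn.Conv2d(in_channels, num_groups, kernel_size=1)
        self.num_bins = num_bins
        # One small head per stage, each predicting a distribution over bins.
        self.stage_heads = nn.ModuleList(
            [nn.Linear(num_groups * in_channels, num_bins) for _ in range(num_stages)]
        )

    def forward(self, feat):                                   # feat: (B, C, H, W)
        b = feat.shape[0]
        # Soft assignment of every spatial position to each group.
        assign = torch.softmax(self.score(feat).flatten(2), dim=-1)  # (B, G, H*W)
        flat = feat.flatten(2)                                       # (B, C, H*W)
        grouped = torch.einsum('bgn,bcn->bgc', assign, flat)         # (B, G, C)
        pooled = grouped.flatten(1)                                  # (B, G*C)

        # Soft stagewise regression: each stage adds an expected bin index at a
        # progressively finer scale, so no single large classifier or direct
        # regressor is needed.
        angle = torch.zeros(b, device=feat.device)
        idx = torch.arange(self.num_bins, dtype=feat.dtype, device=feat.device)
        width = 180.0                             # assumed angle range in degrees
        for head in self.stage_heads:
            width = width / self.num_bins
            probs = torch.softmax(head(pooled), dim=-1)              # (B, bins)
            angle = angle + (probs * idx).sum(dim=-1) * width
        return angle - 90.0                       # shift roughly into [-90, 90]


# Example: one angle per image from an 8x8, 64-channel feature map.
model = SpatialGroupingSketch()
yaw = model(torch.randn(2, 64, 8, 8))             # tensor of shape (2,)
```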


Results from the Paper


Task                 | Dataset  | Model                 | Metric Name                            | Metric Value | Global Rank
Head Pose Estimation | AFLW2000 | FSA-Net (Caps-Fusion) | MAE                                    | 5.07         | # 16
Head Pose Estimation | AFLW2000 | FSA-Net (Caps-Fusion) | Geodesic Error (GE)                    | 8.16         | # 4
Head Pose Estimation | BIWI     | FSA-Net (Caps-Fusion) | MAE (trained with other data)          | 4.00         | # 9
Head Pose Estimation | BIWI     | FSA-Net (Caps-Fusion) | Geodesic Error (GE)                    | 7.64         | # 4
Head Pose Estimation | BIWI     | FSA-Net (Caps-Fusion) | MAE-aligned (trained with other data)  | 2.92         | # 1
Head Pose Estimation | BIWI     | FSA-Net (Caps-Fusion) | Geodesic Error - aligned (GE)          | 5.36         | # 1

Methods


No methods listed for this paper.