Pyramid Spatial-Temporal Aggregation for Video-Based Person Re-Identification

Video-based person re-identification aims to associate video clips of the same person across multiple non-overlapping cameras. Spatial-temporal representations can provide richer and complementary information between frames, which is crucial for distinguishing the target person when occlusion occurs. This paper proposes a novel Pyramid Spatial-Temporal Aggregation (PSTA) framework that aggregates frame-level features progressively and fuses the hierarchical temporal features into a final video-level representation. In this way, short-term and long-term temporal information can be exploited at different hierarchies. Furthermore, a Spatial-Temporal Aggregation Module (STAM) is proposed to enhance the aggregation capability of PSTA. It mainly consists of two novel attention blocks: Spatial Reference Attention (SRA) and Temporal Reference Attention (TRA). SRA explores the spatial correlations within a frame to determine the attention weight of each location, while TRA extends SRA with correlations between adjacent frames, so that temporal consistency information can be fully exploited to suppress interfering features and strengthen discriminative ones. Extensive experiments on several challenging benchmarks demonstrate the effectiveness of the proposed PSTA; our full model reaches 91.5% and 98.3% Rank-1 accuracy on the MARS and DukeMTMC-VideoReID benchmarks, respectively.
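The sketch below illustrates the overall idea in PyTorch. It is a minimal reading of the abstract, not the authors' implementation: the reduction ratio, the sigmoid-gated attention formulation, and the way adjacent frame pairs are fused at each pyramid level are all illustrative assumptions, and for brevity the sketch returns only the top of the pyramid rather than fusing the intermediate hierarchical features into the final representation as the full model does.

```python
# Hedged sketch of the PSTA idea. Layer shapes, the reduction ratio, and the
# exact attention formulation are assumptions; the abstract only specifies
# the overall design (SRA per frame, TRA across adjacent frames, pyramid fusion).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialReferenceAttention(nn.Module):
    """Weights each spatial location of a frame by its correlation with a
    globally pooled spatial reference (hypothetical formulation)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        ref = F.adaptive_avg_pool2d(x, 1)       # global spatial reference
        attn = torch.sigmoid(self.fc(x * ref))  # correlation map: (B, 1, H, W)
        return x * attn


class TemporalReferenceAttention(nn.Module):
    """Extends SRA by taking the adjacent frame as the reference, so that
    temporally consistent regions are strengthened."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
        )

    def forward(self, x, x_adj):                # both: (B, C, H, W)
        attn = torch.sigmoid(self.fc(torch.cat([x, x_adj], dim=1)))
        return x * attn


class PSTASketch(nn.Module):
    """Pyramid aggregation: each level fuses adjacent frame pairs, halving
    the temporal length until a single video-level feature remains."""
    def __init__(self, channels: int):
        super().__init__()
        self.sra = SpatialReferenceAttention(channels)
        self.tra = TemporalReferenceAttention(channels)

    def forward(self, frames):                  # frames: (B, T, C, H, W), T a power of 2
        B, T, C, H, W = frames.shape
        feats = [self.sra(frames[:, t]) for t in range(T)]
        while len(feats) > 1:                   # one pyramid level per iteration
            feats = [
                self.tra(feats[i], feats[i + 1]) + self.tra(feats[i + 1], feats[i])
                for i in range(0, len(feats), 2)
            ]
        return F.adaptive_avg_pool2d(feats[0], 1).flatten(1)  # (B, C)


if __name__ == "__main__":
    clip = torch.randn(2, 4, 256, 16, 8)        # 2 clips of 4 frames each
    print(PSTASketch(256)(clip).shape)          # torch.Size([2, 256])
```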

Task                       Dataset              Model  Metric Name  Metric Value  Global Rank
Person Re-Identification   DukeMTMC-VideoReID   PSTA   mAP          97.4          #1
Person Re-Identification   iLIDS-VID            PSTA   Rank-1       91.5          #3
Person Re-Identification   MARS                 PSTA   mAP          85.8          #6

Methods


Pyramid Spatial-Temporal Aggregation (PSTA), Spatial-Temporal Aggregation Module (STAM), Spatial Reference Attention (SRA), Temporal Reference Attention (TRA)