DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action Recognition

12 Oct 2022 · Haodong Duan, Jiaqi Wang, Kai Chen, Dahua Lin

Graph convolution networks (GCNs) have been widely used in skeleton-based action recognition. We note that existing GCN-based approaches primarily rely on prescribed graphical structures (i.e., a manually defined topology of skeleton joints), which limits their flexibility to capture complicated correlations between joints. To move beyond this limitation, we propose a new framework for skeleton-based action recognition, namely Dynamic Group Spatio-Temporal GCN (DG-STGCN). It consists of two modules: DG-GCN for spatial modeling and DG-TCN for temporal modeling. In particular, DG-GCN uses learned affinity matrices to capture dynamic graphical structures instead of relying on a prescribed one, while DG-TCN performs group-wise temporal convolutions with varying receptive fields and incorporates a dynamic joint-skeleton fusion module for adaptive multi-level temporal modeling. On a wide range of benchmarks, including NTURGB+D, Kinetics-Skeleton, BABEL, and Toyota SmartHome, DG-STGCN consistently outperforms state-of-the-art methods, often by a notable margin.
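The core idea of DG-GCN, as the abstract states, is to replace a prescribed joint topology with learned, data-dependent affinity matrices. A minimal NumPy sketch of that pattern is below; the function names, shapes, and the query/key-style similarity used to build the per-sample affinity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    # numerically stable row-wise softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_gcn_layer(x, W_theta, W_phi, W_out, A_learned):
    """One spatial message-passing step with a dynamic affinity matrix.

    x: (V, C) joint features for one frame (V joints, C channels).
    A data-dependent affinity is computed from pairwise similarity of
    embedded joints, added to a freely learned matrix (hypothetical
    combination), then used to aggregate neighbor features.
    """
    theta = x @ W_theta             # (V, d) embedding of each joint
    phi = x @ W_phi                 # (V, d) second embedding
    A_dyn = softmax(theta @ phi.T)  # (V, V) per-sample affinity, rows sum to 1
    A = A_dyn + A_learned           # dynamic + learned static topology
    return A @ x @ W_out            # aggregate over joints, project channels

V, C, d, C_out = 25, 16, 8, 32      # e.g. 25 joints as in NTU RGB+D skeletons
x = rng.standard_normal((V, C))
out = dynamic_gcn_layer(
    x,
    rng.standard_normal((C, d)) * 0.1,
    rng.standard_normal((C, d)) * 0.1,
    rng.standard_normal((C, C_out)) * 0.1,
    rng.standard_normal((V, V)) * 0.01,
)
print(out.shape)
```

Because the affinity is computed from the input features themselves, each sample effectively gets its own graph, which is what frees the model from a fixed skeleton topology.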

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Skeleton Based Action Recognition | NTU RGB+D | DG-STGCN | Accuracy (CV) | 97.5 | #4 |
| Skeleton Based Action Recognition | NTU RGB+D | DG-STGCN | Accuracy (CS) | 93.2 | #7 |
| Skeleton Based Action Recognition | NTU RGB+D | DG-STGCN | Ensembled Modalities | 4 | #2 |
| Skeleton Based Action Recognition | NTU RGB+D 120 | DG-STGCN | Accuracy (Cross-Subject) | 89.6 | #11 |
| Skeleton Based Action Recognition | NTU RGB+D 120 | DG-STGCN | Accuracy (Cross-Setup) | 91.3 | #6 |
| Skeleton Based Action Recognition | NTU RGB+D 120 | DG-STGCN | Ensembled Modalities | 4 | #1 |
