Relational Network for Skeleton-Based Action Recognition

7 May 2018 · Wu Zheng, Lin Li, Zhao-Xiang Zhang, Yan Huang, Liang Wang

With the rapid development of accurate and low-cost human skeleton capture systems, skeleton-based action recognition has attracted much attention recently. Most existing methods use Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs) to extract the spatio-temporal information embedded in skeleton sequences. However, these approaches are limited in their ability to model relations within a single skeleton, because important structural information is lost when the raw skeleton data are converted to fit the input format of a CNN or RNN. In this paper, we propose an Attentional Recurrent Relational Network-LSTM (ARRN-LSTM) that simultaneously models the spatial configuration and temporal dynamics of skeletons for action recognition. We introduce a Recurrent Relational Network to learn spatial features within a single skeleton, followed by a multi-layer LSTM to learn temporal features across the skeleton sequence. Between the two modules, we design an adaptive attention module that focuses on the most discriminative parts of the skeleton. To exploit the complementarity of different geometric primitives in the skeleton for richer relational modeling, we design a two-stream architecture that learns structural features over joints and over lines (bones) simultaneously. Extensive experiments on several popular skeleton datasets show that the proposed approach outperforms most mainstream methods.
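
The page carries no code, so the following is a minimal PyTorch sketch of one stream of the pipeline the abstract describes, under stated assumptions: a pairwise relation module over joints (a plain single-pass relation network rather than the authors' recurrent variant), a learned attention pool, and a two-layer LSTM. All module names, hidden sizes, and the fusion rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RelationalModule(nn.Module):
    """Pairwise relational reasoning over the joints of one skeleton frame:
    a shared MLP g is applied to every ordered joint pair, then summed per joint."""
    def __init__(self, joint_dim=3, hidden=64):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, joints):                       # joints: (N, J, 3)
        N, J, D = joints.shape
        xi = joints.unsqueeze(2).expand(N, J, J, D)  # joint i replicated
        xj = joints.unsqueeze(1).expand(N, J, J, D)  # joint j replicated
        rel = self.g(torch.cat([xi, xj], dim=-1))    # all (i, j) pairs: (N, J, J, H)
        return rel.sum(dim=2)                        # per-joint relational feature: (N, J, H)

class AttentionPool(nn.Module):
    """Adaptive attention over joints: softmax weights pick out discriminative parts."""
    def __init__(self, hidden=64):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, feats):                        # feats: (N, J, H)
        w = torch.softmax(self.score(feats), dim=1)  # (N, J, 1), sums to 1 over joints
        return (w * feats).sum(dim=1)                # weighted pool: (N, H)

class SkeletonStream(nn.Module):
    """One stream: per-frame relational features -> attention pool -> LSTM over time."""
    def __init__(self, joint_dim=3, hidden=64, num_classes=60):
        super().__init__()
        self.rel = RelationalModule(joint_dim, hidden)
        self.att = AttentionPool(hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, seq):                          # seq: (B, T, J, 3)
        B, T, J, D = seq.shape
        frames = self.rel(seq.reshape(B * T, J, D))  # (B*T, J, H)
        pooled = self.att(frames).reshape(B, T, -1)  # (B, T, H)
        out, _ = self.lstm(pooled)                   # temporal modeling
        return self.fc(out[:, -1])                   # class logits from the last step

# Illustrative usage: 2 clips, 30 frames, 25 joints (the NTU RGB+D joint layout).
model = SkeletonStream(num_classes=60)
logits = model(torch.randn(2, 30, 25, 3))            # -> (2, 60)
```

In the paper's two-stream design, a second instance of the same stream would consume line (bone) features, e.g. coordinate differences between connected joints, and the two streams' logits would be fused; averaging the logits is likewise an assumption here, not the authors' stated fusion rule.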

Datasets

NTU RGB+D

Task                                Dataset     Model       Metric                     Value (%)   Global Rank
Skeleton Based Action Recognition   NTU RGB+D   ARRN-LSTM   Accuracy (Cross-View)      88.8        #88
Skeleton Based Action Recognition   NTU RGB+D   ARRN-LSTM   Accuracy (Cross-Subject)   80.7        #95
