A Deep Neural Framework for Continuous Sign Language Recognition by Iterative Training

This work develops a continuous sign language (SL) recognition framework with deep neural networks, which directly transcribes videos of SL sentences into sequences of ordered gloss labels. Previous methods for continuous SL recognition usually employ hidden Markov models, which have limited capacity to capture temporal information. In contrast, our proposed architecture adopts deep convolutional neural networks with stacked temporal fusion layers as the feature extraction module, and bi-directional recurrent neural networks as the sequence learning module. We propose an iterative optimization process for this architecture to fully exploit the representation capability of deep neural networks with limited data. We first train the end-to-end recognition model to obtain an alignment proposal, and then use that alignment proposal as strong supervisory information to directly tune the feature extraction module. This training process can be run iteratively to further improve recognition performance. We further contribute by exploring the multimodal fusion of RGB images and optical flow for sign language. Our method is evaluated on two challenging SL recognition benchmarks, and outperforms the state of the art by a relative improvement of more than 15% on both databases.
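The abstract describes a feature pipeline of a per-frame CNN, stacked temporal fusion (1-D convolution) layers, and a bi-directional RNN producing gloss predictions. The sketch below illustrates that layer layout in PyTorch; all layer sizes, module names, and the toy input dimensions are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of the abstract's architecture (assumed sizes):
# per-frame CNN features -> stacked temporal 1-D conv fusion -> BiLSTM
# -> per-chunk gloss logits (with one extra class for a CTC blank).
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, num_glosses, feat_dim=64, hidden=128):
        super().__init__()
        # Spatial feature extractor applied to each frame independently
        # (a real system would use a much deeper 2-D CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Stacked temporal fusion layers: 1-D convolutions over time,
        # each followed by temporal pooling.
        self.temporal = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, 5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(feat_dim, feat_dim, 5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bi-directional recurrent sequence learning module.
        self.rnn = nn.LSTM(feat_dim, hidden,
                           bidirectional=True, batch_first=True)
        # num_glosses + 1 outputs: the extra class is the CTC blank.
        self.classifier = nn.Linear(2 * hidden, num_glosses + 1)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).view(b, t, -1)     # (B, T, F)
        f = self.temporal(f.transpose(1, 2)).transpose(1, 2)  # (B, T/4, F)
        h, _ = self.rnn(f)
        return self.classifier(h)  # gloss logits per temporal chunk

model = SignRecognizer(num_glosses=10)
video = torch.randn(2, 16, 3, 32, 32)  # 2 toy clips of 16 frames each
logits = model(video)
print(logits.shape)  # torch.Size([2, 4, 11])
```

Under the iterative scheme the abstract describes, such a model would first be trained end-to-end (e.g. with a CTC-style loss over these logits) to produce frame-to-gloss alignment proposals, which would then serve as direct frame-level supervision for fine-tuning the `cnn` feature extractor before the next round.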

PDF (IEEE Transactions, 2019)
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Sign Language Recognition | RWTH-PHOENIX-Weather 2014 | DNF | Word Error Rate (WER) | 22.86 | #12 |
