Fast-Slow Recurrent Neural Networks

Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling, and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs, as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character-level language modeling datasets, Penn Treebank and Hutter Prize Wikipedia, where we improve state-of-the-art results to $1.19$ and $1.25$ bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves $1.20$ BPC on Hutter Prize Wikipedia, outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general, as any kind of RNN cell can serve as a building block of the FS-RNN architecture, so it can be flexibly applied to different tasks.
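To make the wiring concrete, here is a minimal sketch of one FS-RNN time step in NumPy. It follows the architecture described above: a stack of $k$ fast cells $F_1 \ldots F_k$ applied sequentially within each time step (a deep transition), plus one slow cell $S$ updated once per time step (a second timescale). For brevity it uses vanilla tanh RNN cells rather than the LSTM cells of the paper's FS-LSTM models; the helper names (`rnn_cell`, `fs_rnn_step`, `init_cell`) and the weight initialization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rnn_cell(params, h, x):
    """Vanilla tanh RNN cell: h' = tanh(Wh @ h + Wx @ x + b).
    Stands in for the LSTM cells used in the paper."""
    Wh, Wx, b = params
    return np.tanh(Wh @ h + Wx @ x + b)

def init_cell(rng, hidden_dim, input_dim, scale=0.1):
    """Illustrative random initialization (not from the paper)."""
    return (scale * rng.standard_normal((hidden_dim, hidden_dim)),
            scale * rng.standard_normal((hidden_dim, input_dim)),
            np.zeros(hidden_dim))

def fs_rnn_step(fast_params, slow_params, h_fast, h_slow, x):
    """One FS-RNN time step with k fast cells F1..Fk and one slow cell S."""
    k = len(fast_params)
    # F1: reads the last fast state of the previous time step and the input x_t
    h = rnn_cell(fast_params[0], h_fast, x)
    # S: updated only once per time step, reading F1's output
    h_slow = rnn_cell(slow_params, h_slow, h)
    # F2: receives the slow state as its input
    h = rnn_cell(fast_params[1], h, h_slow)
    # F3..Fk: extra fast transitions within the same time step, no external input
    empty = np.zeros(0)
    for i in range(2, k):
        h = rnn_cell(fast_params[i], h, empty)
    return h, h_slow  # the final fast state h^{Fk}_t feeds the prediction

# Toy usage: k = 4 fast cells, mirroring the FS-LSTM-4 configuration
rng = np.random.default_rng(0)
hidden, inp, k = 8, 5, 4
fast = ([init_cell(rng, hidden, inp),            # F1 reads x_t
         init_cell(rng, hidden, hidden)] +       # F2 reads h^S_t
        [init_cell(rng, hidden, 0) for _ in range(k - 2)])
slow = init_cell(rng, hidden, hidden)            # S reads h^{F1}_t

h_fast, h_slow = np.zeros(hidden), np.zeros(hidden)
for t in range(10):                              # run a short random sequence
    h_fast, h_slow = fs_rnn_step(fast, slow, h_fast, h_slow,
                                 rng.standard_normal(inp))
```

The sketch illustrates why the design combines both families of RNNs: the loop over $F_3 \ldots F_k$ deepens the per-step transition, while $S$ carries information across time steps at a slower rate.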

NeurIPS 2017
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | enwik8 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | #35 |
| Language Modelling | enwik8 | Large FS-LSTM-4 | Number of params | 47M | #22 |
| Language Modelling | Hutter Prize | Large FS-LSTM-4 | Bit per Character (BPC) | 1.245 | #15 |
| Language Modelling | Hutter Prize | Large FS-LSTM-4 | Number of params | 47M | #8 |
| Language Modelling | Hutter Prize | FS-LSTM-4 | Bit per Character (BPC) | 1.277 | #17 |
| Language Modelling | Hutter Prize | FS-LSTM-4 | Number of params | 27M | #16 |
| Language Modelling | Penn Treebank (Character Level) | FS-LSTM-4 | Bit per Character (BPC) | 1.190 | #10 |
| Language Modelling | Penn Treebank (Character Level) | FS-LSTM-4 | Number of params | 27M | #1 |
| Language Modelling | Penn Treebank (Character Level) | FS-LSTM-2 | Bit per Character (BPC) | 1.193 | #12 |
| Language Modelling | Penn Treebank (Character Level) | FS-LSTM-2 | Number of params | 27M | #1 |
