Simple Recurrent Units for Highly Parallelizable Recurrence

EMNLP 2018. Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

Common recurrent neural architectures scale poorly due to the intrinsic difficulty of parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability...
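As the abstract notes, SRU is designed so that the expensive matrix multiplications depend only on the input sequence and can be computed for all time steps in parallel, leaving only cheap elementwise operations in the sequential loop. A minimal NumPy sketch of a single SRU layer, following the gating equations in the paper (the function name `sru_cell` and the plain-loop structure are illustrative, not the authors' CUDA implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_cell(X, W, Wf, Wr, vf, vr, bf, br):
    """Run one SRU layer over a sequence.

    X is (T, d); W, Wf, Wr are (d, d); vf, vr, bf, br are (d,).
    The three matrix multiplies below touch only the inputs, so they
    are batched over all time steps up front; the time loop contains
    only elementwise operations, which is what makes SRU fast.
    """
    T, d = X.shape
    U = X @ W        # candidate values for every step at once
    Uf = X @ Wf      # forget-gate pre-activations
    Ur = X @ Wr      # reset-gate pre-activations
    c = np.zeros(d)
    H = np.zeros((T, d))
    for t in range(T):
        f = sigmoid(Uf[t] + vf * c + bf)   # forget gate
        c = f * c + (1.0 - f) * U[t]       # light elementwise recurrence
        r = sigmoid(Ur[t] + vr * c + br)   # reset gate
        H[t] = r * c + (1.0 - r) * X[t]    # highway/skip connection
    return H, c
```

The highway connection in the last line lets the input pass through unchanged when the reset gate closes, which is part of how SRU retains capacity despite the simplified recurrence.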

TASK                 DATASET                    MODEL              METRIC      VALUE   RANK
Question Answering   SQuAD1.1                   SRU                EM          71.4    #126
Question Answering   SQuAD1.1                   SRU                F1          80.2    #126
Question Answering   SQuAD1.1 dev               SRU                EM          71.4    #21
Question Answering   SQuAD1.1 dev               SRU                F1          80.2    #25
Machine Translation  WMT2014 English-German     Transformer + SRU  BLEU score  28.4    #21

Methods used in the Paper