Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks

18 Mar 2017 · Zhilin Yang, Ruslan Salakhutdinov, William W. Cohen

Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear whether such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant gains can often be obtained. These gains lead to new state-of-the-art results on several well-studied tasks.
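The core idea is parameter sharing: the lower recurrent layers of the tagger are shared between the source and target tasks, while each task keeps its own output layer, and the two tasks are trained jointly so the scarce target data benefits from the source signal. The sketch below illustrates this setup in PyTorch under stated assumptions: the layer sizes, class names (`SharedEncoder`, `Tagger`, `train_step`), and the plain softmax heads are illustrative choices, not the authors' exact architecture, which uses hierarchical character- and word-level GRUs with CRF output layers.

```python
# Minimal sketch of cross-task transfer for sequence tagging: one shared
# word-level BiGRU encoder, two task-specific output heads, alternating
# gradient updates. Sizes and tag counts are hypothetical.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Word-level bidirectional GRU shared across tasks."""
    def __init__(self, vocab_size, emb_dim=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True,
                          bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, 2 * hidden)
        out, _ = self.rnn(self.embed(token_ids))
        return out

class Tagger(nn.Module):
    """Task-specific linear head on top of the shared encoder
    (the paper uses a CRF here; a softmax head keeps the sketch short)."""
    def __init__(self, encoder, num_tags, hidden=200):
        super().__init__()
        self.encoder = encoder                     # shared parameters
        self.head = nn.Linear(2 * hidden, num_tags)  # task-specific

    def forward(self, token_ids):
        return self.head(self.encoder(token_ids))

# One shared encoder, two heads: e.g. POS (source) and NER (target).
encoder = SharedEncoder(vocab_size=10000)
source_tagger = Tagger(encoder, num_tags=45)   # e.g. PTB POS tag set
target_tagger = Tagger(encoder, num_tags=9)    # e.g. CoNLL NER tag set

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(source_tagger.head.parameters())
    + list(target_tagger.head.parameters()), lr=1e-3)

def train_step(tagger, tokens, tags):
    """One gradient step for one task; alternating steps over both
    tasks update the shared encoder with signal from each."""
    opt.zero_grad()
    logits = tagger(tokens)                        # (B, T, num_tags)
    loss = loss_fn(logits.flatten(0, 1), tags.flatten())
    loss.backward()
    opt.step()
    return loss.item()

# Toy random batches, just to show the alternating update loop.
src_x, src_y = torch.randint(0, 10000, (8, 20)), torch.randint(0, 45, (8, 20))
tgt_x, tgt_y = torch.randint(0, 10000, (8, 20)), torch.randint(0, 9, (8, 20))
for _ in range(3):
    train_step(source_tagger, src_x, src_y)   # plentiful source annotations
    train_step(target_tagger, tgt_x, tgt_y)   # scarce target annotations
```

In the paper, how much is shared depends on how related the tasks are: cross-domain transfer can share everything up to and including the output layer, while cross-application and cross-lingual transfer share progressively fewer layers.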

Task                             Dataset                Model        Metric    Value   Global Rank
Named Entity Recognition (NER)   CoNLL 2003 (English)   Yang et al.  F1        91.26   #64
Part-Of-Speech Tagging           Penn Treebank          Yang et al.  Accuracy  97.55   #10
