TENER: Adapting Transformer Encoder for Named Entity Recognition

10 Nov 2019 · Hang Yan, Bocao Deng, Xiaonan Li, Xipeng Qiu

Bidirectional long short-term memory networks (BiLSTMs) have been widely used as the encoder in models for the named entity recognition (NER) task. Recently, the Transformer has been broadly adopted across Natural Language Processing (NLP) tasks owing to its parallelism and strong performance. Nevertheless, the Transformer's performance on NER is not as good as on other NLP tasks. In this paper, we propose TENER, a NER architecture that adopts an adapted Transformer encoder to model both character-level and word-level features. By incorporating direction- and relative-distance-aware attention and un-scaled attention, we show that a Transformer-like encoder is just as effective for NER as for other NLP tasks.
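To make the two adaptations in the abstract concrete, below is a minimal single-head sketch of attention with direction-aware relative positional encodings and no 1/sqrt(d_k) scaling. It is an illustrative reconstruction, not the authors' released implementation: the function names, the single-head simplification, and the learned bias vectors `u` and `v_bias` are assumptions made for clarity. Direction awareness comes from the sine terms being odd functions, so the encoding for offset -d is distinguishable from offset +d.

```python
import math
import torch
import torch.nn.functional as F

def relative_sinusoidal_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal encodings for relative offsets -seq_len+1 .. seq_len-1.
    Because sin(-d) = -sin(d), the encoding carries direction as well as
    distance."""
    positions = torch.arange(-seq_len + 1, seq_len, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2).float()
                    * (-math.log(10000.0) / d_model))
    enc = torch.zeros(2 * seq_len - 1, d_model)
    enc[:, 0::2] = torch.sin(positions * div)
    enc[:, 1::2] = torch.cos(positions * div)
    return enc  # shape: (2*seq_len - 1, d_model)

def unscaled_relative_attention(q, k, v, u, v_bias):
    """Single-head attention with relative, direction-aware positions and
    *no* 1/sqrt(d_k) scaling (the un-scaled attention the paper argues for).
    q, k, v: (seq_len, d); u, v_bias: (d,) learned global bias vectors
    (hypothetical names for the content/position biases)."""
    seq_len, d = q.shape
    rel = relative_sinusoidal_encoding(seq_len, d)   # (2L-1, d)
    content = (q + u) @ k.t()                        # content-based term
    position = (q + v_bias) @ rel.t()                # (L, 2L-1), by offset
    # Re-index so that position[i, j] corresponds to relative offset j - i.
    idx = torch.arange(seq_len).unsqueeze(0) - torch.arange(seq_len).unsqueeze(1)
    position = position.gather(1, idx + seq_len - 1)
    scores = content + position                      # note: deliberately unscaled
    return F.softmax(scores, dim=-1) @ v
```

Leaving out the usual 1/sqrt(d_k) factor sharpens the attention distribution, which the paper argues suits NER, where each token should attend to a few informative context words rather than a smooth average.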

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Named Entity Recognition (NER) | CoNLL 2003 (English) | TENER | F1 | 92.62 | #35 |
| Chinese Named Entity Recognition | MSRA | TENER | F1 | 92.74 | #21 |
| Chinese Named Entity Recognition | Resume NER | TENER | F1 | 95.00 | #11 |
| Chinese Named Entity Recognition | Weibo NER | TENER | F1 | 58.17 | #16 |
