Deep contextualized word representations

NAACL 2018 · Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
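
Concretely, the ELMo representation of token k is a task-specific scalar mixture of the biLM's layer activations, ELMo_k = γ · Σ_j s_j · h_{k,j}, where the s_j are softmax-normalized weights and γ scales the whole vector. The following is a minimal PyTorch sketch of that mixing step only; the class name, tensor shapes, and layer count are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of biLM layer activations:
    ELMo_k = gamma * sum_j softmax(w)_j * h_{k,j}.
    Sketch only; shapes and names are illustrative."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))  # w_j, learned per task
        self.gamma = nn.Parameter(torch.ones(1))                     # gamma, learned per task

    def forward(self, layer_activations: torch.Tensor) -> torch.Tensor:
        # layer_activations: (num_layers, batch, seq_len, dim), stacked biLM states
        s = torch.softmax(self.scalar_weights, dim=0)                # s_j
        mixed = (s.view(-1, 1, 1, 1) * layer_activations).sum(dim=0)
        return self.gamma * mixed                                    # (batch, seq_len, dim)
```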

| Task | Dataset | Model | Metric | Value | Global rank |
| --- | --- | --- | --- | --- | --- |
| Citation Intent Classification | ACL-ARC | BiLSTM-Attention + ELMo | F1 | 54.6 | #3 |
| Named Entity Recognition | CoNLL 2003 (English) | BiLSTM-CRF + ELMo | F1 | 92.22 | #18 |
| Semantic Role Labeling | OntoNotes | He et al., 2017 + ELMo | F1 | 84.6 | #5 |
| Coreference Resolution | OntoNotes | e2e-coref + ELMo | F1 | 70.4 | #8 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | % Test Accuracy | 89.3 | #9 |
| | | | % Train Accuracy | 92.1 | #25 |
| | | | Parameters | 40m | #2 |
| Natural Language Inference | SNLI | ESIM + ELMo | % Test Accuracy | 88.7 | #13 |
| | | | % Train Accuracy | 91.6 | #27 |
| | | | Parameters | 8.0m | #2 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (ensemble) | EM | 81.003 | #45 |
| | | | F1 | 87.432 | #49 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (single model) | EM | 78.580 | #68 |
| | | | F1 | 85.833 | #68 |
| Question Answering | SQuAD1.1 dev | BiDAF + Self Attention + ELMo | F1 | 85.6 | #16 |
| Question Answering | SQuAD2.0 | BiDAF + Self Attention + ELMo (single model) | EM | 63.372 | #168 |
| | | | F1 | 66.251 | #182 |
| Sentiment Analysis | SST-5 Fine-grained classification | BCN + ELMo | Accuracy | 54.7 | #2 |
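
Every entry above couples ELMo with an existing task model, reflecting the paper's claim that the representations can simply be added to such models. As a minimal sketch of obtaining ELMo embeddings in practice, the snippet below uses the AllenNLP `Elmo` module; the options/weights file paths are placeholders to be pointed at a pretrained biLM, not specific released artifacts:

```python
from allennlp.modules.elmo import Elmo, batch_to_ids

# Placeholder paths (assumptions): point these at a pretrained biLM's
# options JSON and weights HDF5.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# One output representation = one learned scalar mix over the biLM layers.
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

# Map pre-tokenized sentences to the character ids the biLM expects.
sentences = [["The", "bank", "raised", "rates", "."]]
character_ids = batch_to_ids(sentences)

output = elmo(character_ids)
# output["elmo_representations"] is a list holding one
# (batch, seq_len, dim) tensor of contextualized embeddings.
embeddings = output["elmo_representations"][0]
```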
