Deep contextualized word representations

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
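The core idea is that each token's vector is a learned, task-specific combination of the biLM's internal layer states rather than just the top layer's output. As a rough illustration only (layer count, dimensions, and class names below are assumptions, not the released implementation), a minimal PyTorch sketch of that layer-mixing step might look like:

```python
import torch
import torch.nn as nn

class LayerMix(nn.Module):
    """Collapse the biLM's per-layer token states into one vector per token
    using softmax-normalized learned weights and a global scale.
    Illustrative sketch only, not the authors' released code."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))  # per-layer weights (pre-softmax)
        self.gamma = nn.Parameter(torch.ones(1))                   # task-specific scale

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, dim) -- the biLM's
        # internal states for every token in the batch.
        weights = torch.softmax(self.layer_logits, dim=0)
        mixed = (weights.view(-1, 1, 1, 1) * layer_states).sum(dim=0)
        return self.gamma * mixed

# Dummy example: 3 layers (e.g., a character-level layer plus two biLSTM
# layers), batch of 2, 7 tokens, 1024-dim states -> one vector per token.
states = torch.randn(3, 2, 7, 1024)
token_vectors = LayerMix(num_layers=3)(states)   # shape (2, 7, 1024)
```

Because the mixing weights are trained with the downstream task, each task can emphasize whichever biLM layers carry the most useful information for it.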

NAACL 2018

Results from the Paper


Ranked #3 on Only Connect Walls Dataset Task 1 (Grouping) on OCW, measured by Wasserstein Distance (WD), using extra training data.

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Citation Intent Classification | ACL-ARC | BiLSTM-Attention + ELMo | F1 | 54.6 | #4 |
| Named Entity Recognition (NER) | CoNLL++ | BiLSTM-CRF + ELMo | F1 | 93.42 | #6 |
| Named Entity Recognition (NER) | CoNLL 2003 (English) | BiLSTM-CRF + ELMo | F1 | 92.22 | #44 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | # Correct Groups | 55 ± 4 | #17 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | Fowlkes Mallows Score (FMS) | 29.5 ± 0.3 | #16 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | Adjusted Rand Index (ARI) | 11.8 ± 0.4 | #16 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | Adjusted Mutual Information (AMI) | 14.5 ± 0.4 | #16 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | # Solved Walls | 0 ± 0 | #10 |
| Only Connect Walls Dataset Task 1 (Grouping) | OCW | ELMo (LARGE) | Wasserstein Distance (WD) | 86.3 ± 0.6 | #3 |
| Semantic Role Labeling | OntoNotes | He et al., 2017 + ELMo | F1 | 84.6 | #12 |
| Coreference Resolution | OntoNotes | e2e-coref + ELMo | F1 | 70.4 | #20 |
| Conversational Response Selection | PolyAI Reddit | ELMo | 1-of-100 Accuracy | 19.3% | #5 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | % Test Accuracy | 89.3 | #17 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | % Train Accuracy | 92.1 | #32 |
| Natural Language Inference | SNLI | ESIM + ELMo Ensemble | Parameters | 40m | #4 |
| Natural Language Inference | SNLI | ESIM + ELMo | % Test Accuracy | 88.7 | #29 |
| Natural Language Inference | SNLI | ESIM + ELMo | % Train Accuracy | 91.6 | #34 |
| Natural Language Inference | SNLI | ESIM + ELMo | Parameters | 8.0m | #4 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (single model) | EM | 78.58 | #85 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (single model) | F1 | 85.833 | #87 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (ensemble) | EM | 81.003 | #55 |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention + ELMo (ensemble) | F1 | 87.432 | #63 |
| Question Answering | SQuAD1.1 dev | BiDAF + Self Attention + ELMo | F1 | 85.6 | #23 |
| Question Answering | SQuAD2.0 | BiDAF + Self Attention + ELMo (single model) | EM | 63.372 | #266 |
| Question Answering | SQuAD2.0 | BiDAF + Self Attention + ELMo (single model) | F1 | 66.251 | #271 |
| Sentiment Analysis | SST-5 Fine-grained classification | BCN + ELMo | Accuracy | 54.7 | #7 |
| Word Sense Disambiguation | Supervised | ELMo | Senseval 2 | 71.6 | #23 |
| Word Sense Disambiguation | Supervised | ELMo | Senseval 3 | 69.6 | #20 |
| Word Sense Disambiguation | Supervised | ELMo | SemEval 2007 | 62.2 | #18 |
| Word Sense Disambiguation | Supervised | ELMo | SemEval 2013 | 66.2 | #22 |
| Word Sense Disambiguation | Supervised | ELMo | SemEval 2015 | 71.3 | #23 |
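Most of the entries above take an existing baseline (BiDAF, ESIM, e2e-coref, etc.) and add ELMo vectors alongside its input embeddings. The sketch below shows one way to obtain such vectors; it assumes the AllenNLP library's `allennlp.modules.elmo` interface, and the file paths are placeholders for the released biLM options/weights files, not real URLs.

```python
import torch
from allennlp.modules.elmo import Elmo, batch_to_ids

# Placeholder paths: point these at the released biLM options/weights files.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# One output representation; dropout disabled for this illustration.
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

sentences = [["The", "model", "adds", "ELMo", "features", "."],
             ["Contextual", "vectors", "vary", "by", "sentence", "."]]
character_ids = batch_to_ids(sentences)           # character ids per token
output = elmo(character_ids)
elmo_vectors = output["elmo_representations"][0]  # (batch, seq_len, 1024)

# A downstream model typically concatenates these vectors with its own
# word embeddings before the task-specific encoder.
word_embeddings = torch.randn(2, 6, 300)          # stand-in GloVe-style embeddings
encoder_input = torch.cat([word_embeddings, elmo_vectors], dim=-1)
```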

Methods