All-but-the-Top: Simple and Effective Postprocessing for Word Representations

ICLR 2018 · Jiaqi Mu, Suma Bhat, Pramod Viswanath

Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a very simple, and yet counter-intuitive, postprocessing technique -- eliminate the common mean vector and a few top dominating directions from the word vectors -- that renders off-the-shelf representations even stronger. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textual similarity and text classification) on multiple datasets, with a variety of representation methods and hyperparameter choices, and in multiple languages; in each case, the processed representations are consistently better than the original ones.
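The procedure described in the abstract can be sketched in a few lines of NumPy: subtract the common mean vector, estimate the top principal directions of the centered embeddings, and project them out of every word vector. The function name, argument names, and the default of roughly d/100 removed directions are illustrative choices for this sketch, not an official implementation.

```python
import numpy as np

def all_but_the_top(embeddings, n_components=None):
    """Postprocess word vectors: remove the mean and top dominating directions.

    embeddings:   (vocab_size, dim) matrix of word vectors.
    n_components: number of dominant directions to remove; the paper suggests
                  on the order of dim / 100 (e.g. 2-3 for 300-d vectors).
    """
    if n_components is None:
        n_components = max(1, embeddings.shape[1] // 100)

    # 1. Subtract the common mean vector.
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean

    # 2. Estimate the top principal directions via SVD of the centered matrix;
    #    rows of Vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_directions = vt[:n_components]          # (n_components, dim)

    # 3. Project every vector onto the top directions and subtract that part.
    projections = centered @ top_directions.T   # (vocab_size, n_components)
    return centered - projections @ top_directions
```

Applied to, say, a (400000, 300) GloVe matrix, this returns vectors of the same shape that can be dropped into downstream similarity or classification pipelines in place of the originals.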


Results from the Paper


Task                  | Dataset                              | Model            | Metric   | Value | Global Rank
Sentiment Analysis    | MR                                   | GRU-RNN-WORD2VEC | Accuracy | 78.26 | #11
Sentiment Analysis    | SST-5 (fine-grained classification)  | GRU-RNN-WORD2VEC | Accuracy | 45.02 | #25
Subjectivity Analysis | SUBJ                                 | GRU-RNN-GLOVE    | Accuracy | 91.85 | #15
Text Classification   | TREC-6                               | GRU-RNN-GLOVE    | Error    | 7.0   | #13
