Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline

WS 2018 · Kawin Ethayarajh

Using a random walk model of text generation, Arora et al. (2017) proposed a strong baseline for computing sentence embeddings: take a weighted average of word embeddings and modify with SVD. This simple method even outperforms far more complex approaches such as LSTMs on textual similarity tasks. In this paper, we first show that word vector length has a confounding effect on the probability of a sentence being generated in Arora et al.'s model. We propose a random walk model that is robust to this confound, where the probability of word generation is inversely related to the angular distance between the word and sentence embeddings. Our approach beats Arora et al.'s by up to 44.4% on textual similarity tasks and is competitive with state-of-the-art methods. Unlike Arora et al.'s method, ours requires no hyperparameter tuning, which means it can be used when there is no labelled data.
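To make the abstract concrete, here is a minimal sketch of the SIF-style baseline it describes: a frequency-weighted average of word vectors with the first singular vector removed, plus the angular distance the proposed model uses between word and sentence embeddings. The function names, the toy vectors and unigram probabilities, and the weight parameter a = 1e-3 (the default suggested by Arora et al.) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sif_embedding(sentences, word_vecs, word_probs, a=1e-3):
    """Weighted-average sentence embeddings in the style of Arora et al. (2017):
    each word is weighted by a / (a + p(w)); afterwards, the projection of every
    sentence embedding onto the corpus's first singular vector is removed."""
    embs = []
    for sent in sentences:
        weights = np.array([a / (a + word_probs[w]) for w in sent])
        vecs = np.array([word_vecs[w] for w in sent])
        embs.append(weights @ vecs / len(sent))
    embs = np.array(embs)
    # Common-component removal via SVD (uncentered, as in the SIF baseline).
    _, _, vt = np.linalg.svd(embs, full_matrices=False)
    u = vt[0]
    return embs - np.outer(embs @ u, u)

def angular_distance(x, y):
    """Angular distance between two embeddings: arccos of cosine similarity,
    normalized to [0, 1]. Unlike the inner product, it is insensitive to
    vector length, the confound the paper identifies."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

# Toy usage with random 50-dimensional vectors and made-up unigram probabilities.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "dog", "ran"]
word_vecs = {w: rng.normal(size=50) for w in vocab}
word_probs = dict(zip(vocab, [0.5, 0.1, 0.1, 0.1, 0.2]))
sents = [["the", "cat", "sat"], ["the", "dog", "ran"]]
E = sif_embedding(sents, word_vecs, word_probs)
print(angular_distance(E[0], E[1]))
```

Note how the a / (a + p(w)) weighting downweights frequent words like "the", so that rare, content-bearing words dominate the average; the angular distance then compares directions only, which is what makes the paper's model robust to word vector length.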
