CoVe (Contextualized Word Vectors)

Introduced by McCann et al. in Learned in Translation: Contextualized Word Vectors

CoVe, or Contextualized Word Vectors, contextualizes word vectors using the deep LSTM encoder of an attentional sequence-to-sequence model trained for machine translation. Each $\text{CoVe}$ embedding is therefore a function of the entire input sequence rather than of a single word in isolation. For downstream tasks, these contextual vectors are concatenated with the corresponding $\text{GloVe}$ embeddings:

$$ v = \left[\text{GloVe}\left(x\right), \text{CoVe}\left(x\right)\right]$$

and the concatenated vectors are fed as input features to the task-specific model.
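
To make this concrete, here is a minimal PyTorch sketch of building such concatenated features. It is an illustration under stated assumptions, not the authors' released code: `glove` and `mt_lstm` below are randomly initialized stand-ins for what would in practice be a pretrained, frozen GloVe lookup table and the pretrained two-layer bidirectional MT-LSTM encoder (600-d output per token, so the concatenated vector is 900-d, as in the paper).

```python
import torch
import torch.nn as nn

vocab_size, glove_dim, hidden = 10_000, 300, 300

# Stand-in for the GloVe lookup table (normally loaded with
# pretrained 300-d vectors and kept frozen).
glove = nn.Embedding(vocab_size, glove_dim)

# Stand-in for the MT-LSTM: the paper takes the two-layer
# bidirectional LSTM encoder of a trained translation model, so
# each token receives a 2 * hidden = 600-d contextual vector.
mt_lstm = nn.LSTM(glove_dim, hidden, num_layers=2,
                  bidirectional=True, batch_first=True)

def cove_features(token_ids: torch.Tensor) -> torch.Tensor:
    """Return v = [GloVe(x); CoVe(x)] for each token in the batch."""
    g = glove(token_ids)              # (batch, seq, 300)
    c, _ = mt_lstm(g)                 # (batch, seq, 600), context-dependent
    return torch.cat([g, c], dim=-1)  # (batch, seq, 900)

x = torch.randint(0, vocab_size, (2, 7))  # toy batch of token ids
print(cove_features(x).shape)             # torch.Size([2, 7, 900])
```

The downstream task model then consumes these 900-d features in place of plain GloVe vectors; the MT-LSTM is not fine-tuned on the task.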

Source: Learned in Translation: Contextualized Word Vectors
