context2vec is an unsupervised model for learning generic context embeddings of wide sentential contexts, using a bidirectional LSTM. The model is trained on large plain-text corpora to embed entire sentential contexts and target words in the same low-dimensional space, which is optimized to reflect the inter-dependencies between targets and their entire sentential context as a whole.

In contrast to word2vec, which uses context modeling mostly internally and treats the target word embeddings as its main output, the focus of context2vec is the context representation itself. context2vec achieves its objective by assigning similar embeddings to sentential contexts and their associated target words.
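To make the architecture concrete, below is a minimal PyTorch sketch of a context2vec-style model. This is an illustration under stated assumptions, not the authors' implementation: the class name Context2VecSketch, the dimensions, and the score function are invented for this example.

```python
# Minimal sketch of a context2vec-style model (illustrative assumptions
# throughout; this is not the authors' released code).
import torch
import torch.nn as nn

class Context2VecSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=600):
        super().__init__()
        # Input embeddings feed the two directional context LSTMs.
        self.ctx_embed = nn.Embedding(vocab_size, embed_dim)
        # Target-word embeddings live in the same space as context vectors.
        self.tgt_embed = nn.Embedding(vocab_size, embed_dim)
        self.l2r = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.r2l = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # An MLP merges the two context halves into one context embedding.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def context_vector(self, sentence, target_pos):
        # sentence: (batch, seq_len) token ids; target_pos: index of the
        # target slot. Assumes BOS/EOS padding so both contexts are non-empty.
        emb = self.ctx_embed(sentence)
        left, _ = self.l2r(emb[:, :target_pos])               # left context, read L->R
        right, _ = self.r2l(emb[:, target_pos + 1:].flip(1))  # right context, read R->L
        joint = torch.cat([left[:, -1], right[:, -1]], dim=-1)
        return self.mlp(joint)                                # (batch, embed_dim)

    def score(self, sentence, target_pos, target_ids):
        # Dot product between the context embedding and candidate target
        # embeddings; higher scores mean the target fits the context better.
        ctx = self.context_vector(sentence, target_pos)
        tgt = self.tgt_embed(target_ids)
        return (ctx * tgt).sum(dim=-1)
```

In training, this dot-product score would be plugged into a word2vec-style negative-sampling objective, which is what pushes sentential contexts and their true target words toward similar embeddings in the shared space.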

Source: context2vec: Learning Generic Context Embedding with Bidirectional LSTM

Latest Papers

Token Level Identification of Multiword Expressions Using Contextual Information
Reyhaneh Hashempour, Aline Villavicencio. 2020-07-01

A Comparative Study of Lexical Substitution Approaches based on Neural Language Models
Nikolay Arefyev, Boris Sheludko, Alexander Podolskiy, Alexander Panchenko. 2020-05-29

Word Usage Similarity Estimation with Sentence Representations and Automatic Substitutes
Aina Garí Soler, Marianna Apidianaki, Alexandre Allauzen. 2019-05-20

Lexical Substitution for Evaluating Compositional Distributional Models
Maja Buljan, Sebastian Padó, Jan Šnajder. 2018-06-01

context2vec: Learning Generic Context Embedding with Bidirectional LSTM
Oren Melamud, Jacob Goldberger, Ido Dagan. 2016-08-01