Learning Word Embeddings
23 papers with code • 0 benchmarks • 0 datasets
Latest papers
Learning Word Embeddings with Domain Awareness
Word embeddings are traditionally trained on a large corpus in an unsupervised setting, with no specific design for incorporating domain knowledge.
Poincaré GloVe: Hyperbolic Word Embeddings
Words are not created equal.
Cross-lingual Lexical Sememe Prediction
We propose a novel framework to model correlations between sememes and multi-lingual words in low-dimensional semantic space for sememe prediction.
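As a rough sketch of what prediction in a shared low-dimensional space can look like, the Python snippet below ranks candidate sememes by dot-product similarity to a word vector. The function name, the toy data, and the assumption that word and sememe embeddings already live in one space are illustrative; the paper's framework for learning that space jointly across languages is more involved.

import numpy as np

def predict_sememes(word_vec, sememe_matrix, k=3):
    # Ranks all sememes by dot-product similarity to the word vector,
    # assuming both already live in the same semantic space (an
    # assumption made for this sketch, not taken from the paper).
    scores = sememe_matrix @ word_vec
    return np.argsort(-scores)[:k]

# Toy data: 5 sememes and one word in a 4-dimensional space.
rng = np.random.default_rng(0)
sememes = rng.standard_normal((5, 4))
word = rng.standard_normal(4)
print(predict_sememes(word, sememes))   # indices of the 3 best-scoring sememes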
Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
Word embeddings have been widely adopted across several NLP applications.
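The graph convolutional network named in the title aggregates each word's representation over its neighbours in a syntactic graph such as a dependency parse. Below is a minimal NumPy sketch of one such layer using mean aggregation with self-loops; the normalization, gating, and training objective in the paper itself differ.

import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution step over word-node features H: each word's
    # vector becomes a ReLU of the mean of its own and its neighbours'
    # transformed vectors, with adjacency A defining the syntactic graph.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)      # row degrees for mean aggregation
    return np.maximum((A_hat / deg) @ H @ W, 0.0)

# Toy dependency graph over 3 words with 4-dim features, projected to 2 dims.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.ones((3, 4))
W = np.full((4, 2), 0.5)
print(gcn_layer(A, H, W))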
Skip-gram word embeddings in hyperbolic space
Recent work has demonstrated that embeddings of tree-like graphs in hyperbolic space surpass their Euclidean counterparts in performance by a large margin.
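These hyperbolic methods replace the Euclidean dot product or distance with the metric of a hyperbolic model such as the Poincaré ball. A minimal NumPy sketch of that distance follows; the function name and example points are illustrative, but the formula is the standard Poincaré-ball distance.

import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance between two points inside the unit Poincare ball:
    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

# Points near the boundary are far apart even when Euclidean-close,
# which is what lets hyperbolic space embed tree-like hierarchies.
u = np.array([0.1, 0.2])
v = np.array([0.7, 0.6])
print(poincare_distance(u, v))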
Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech
In this paper, we propose Speech2Vec, a novel deep neural network architecture for learning fixed-length vector representations of audio segments excised from a speech corpus. The learned vectors carry semantic information about the underlying spoken words, so that segments whose spoken words are semantically similar lie close together in the embedding space.
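As a hedged illustration of the encoding step, the PyTorch sketch below maps a variable-length sequence of acoustic frames to a single fixed-length vector with a recurrent encoder. The class name, the 13-dimensional MFCC input, and the 50-dimensional embedding are assumptions for the example; Speech2Vec itself is a full sequence-to-sequence encoder-decoder trained so that the resulting vectors reflect semantic similarity.

import torch
import torch.nn as nn

class SegmentEncoder(nn.Module):
    # Encodes a variable-length sequence of acoustic frames (e.g. MFCCs)
    # into a single fixed-length vector, as in a seq2seq encoder.
    def __init__(self, n_mfcc=13, embed_dim=50):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_mfcc, hidden_size=embed_dim,
                          batch_first=True)

    def forward(self, frames):            # frames: (batch, time, n_mfcc)
        _, h_final = self.rnn(frames)     # final hidden state summarizes the segment
        return h_final.squeeze(0)         # (batch, embed_dim)

encoder = SegmentEncoder()
segment = torch.randn(1, 80, 13)          # one 80-frame audio segment
print(encoder(segment).shape)             # torch.Size([1, 50])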
Grammatical Error Detection Using Error- and Grammaticality-Specific Word Embeddings
In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns.
MIPA: Mutual Information Based Paraphrase Acquisition via Bilingual Pivoting
We present a pointwise mutual information (PMI)-based approach to formalize paraphrasability and propose a variant of PMI, called MIPA, for paraphrase acquisition.
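For reference, the Python sketch below computes plain PMI and the bilingual-pivoting paraphrase probability of Bannard and Callison-Burch (2005) that this line of work builds on; MIPA itself is the paper's variant and is not reproduced here. The toy probabilities are made up for the example.

import math

def pmi(p_xy, p_x, p_y):
    # Pointwise mutual information: PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ).
    # Positive values mean the pair co-occurs more often than chance.
    return math.log(p_xy / (p_x * p_y))

def pivot_prob(p_f_given_e1, p_e2_given_f):
    # Bilingual pivoting (Bannard and Callison-Burch, 2005):
    # p(e2 | e1) = sum over foreign phrases f of p(e2 | f) * p(f | e1).
    return sum(p_e2_given_f[f] * p for f, p in p_f_given_e1.items())

# Toy numbers: two English phrases share one foreign pivot translation.
p_e1_to_f = {"maison": 0.8}           # p(f | e1)
p_f_to_e2 = {"maison": 0.5}           # p(e2 | f)
print(pivot_prob(p_e1_to_f, p_f_to_e2))   # 0.4
print(pmi(0.02, 0.1, 0.1))                # log(2) ~ 0.69: positive association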
Dict2vec: Learning Word Embeddings using Lexical Dictionaries
Learning word embeddings on large unlabeled corpora has been shown to improve many natural language tasks.
The Mixing method: low-rank coordinate descent for semidefinite programming with diagonal constraints
In this paper, we propose a low-rank coordinate descent approach to structured semidefinite programming with diagonal constraints.
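The coordinate step this describes has a simple closed form: writing the SDP variable as X = VᵀV with unit-norm columns, each column is replaced by the normalized negative weighted sum of the others. Below is a minimal NumPy sketch under the standard min ⟨C, X⟩ with diag(X) = 1 formulation; the iteration count and toy cost matrix are illustrative, and the paper's convergence analysis and stopping criteria are more careful.

import numpy as np

def mixing_method(C, k, n_iter=100, seed=0):
    # Minimizes <C, X> over X >= 0 with diag(X) = 1, using the low-rank
    # factorization X = V.T @ V with unit-norm columns v_i of dimension k.
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((k, n))
    V /= np.linalg.norm(V, axis=0)                # unit-norm columns
    for _ in range(n_iter):
        for i in range(n):
            g = V @ C[:, i] - C[i, i] * V[:, i]   # sum over j != i of C_ij v_j
            norm = np.linalg.norm(g)
            if norm > 0:
                V[:, i] = -g / norm               # closed-form coordinate step
    return V.T @ V                                # the (approximate) SDP solution X

# Toy 3-node MAXCUT-style cost matrix (symmetric, zero diagonal).
C = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
X = mixing_method(C, k=3)
print(np.round(np.diag(X), 3))  # diagonal stays at 1 by construction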