Learning Word Embeddings
23 papers with code • 0 benchmarks • 0 datasets
Latest papers with no code
Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets
In the second approach, domain-adapted (DA) word embeddings are learned by exploiting the specificity of domain-specific data sets and the breadth of generic word embeddings.
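One way to picture this combination is below. The paper derives DA embeddings with a CCA-based projection; the plain weighted averaging here is a simplifying assumption for illustration, and the example words and weights are invented:

```python
import numpy as np

def combine_embeddings(generic, domain, weight=0.5):
    """Blend generic and domain-specific embeddings by weighted
    averaging; words missing from one source fall back to the other.
    (The paper itself uses a CCA-based projection; averaging is a
    simplifying assumption.)"""
    combined = {}
    for w in set(generic) | set(domain):
        if w in generic and w in domain:
            combined[w] = weight * generic[w] + (1 - weight) * domain[w]
        else:
            combined[w] = generic[w] if w in generic else domain[w]
    return combined

generic = {"cell": np.array([1.0, 0.0])}  # generic sense (e.g. prison cell)
domain  = {"cell": np.array([0.0, 1.0])}  # domain sense (e.g. biology)
mixed = combine_embeddings(generic, domain)
```

With equal weights, a word present in both sources ends up midway between its generic and domain vectors, while domain-only vocabulary is carried over unchanged.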
Subword-level Composition Functions for Learning Word Embeddings
Subword-level information is crucial for capturing the meaning and morphology of words, especially for out-of-vocabulary entries.
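A common subword composition function sums character n-gram vectors, FastText-style; the sketch below uses that summation with made-up n-gram sizes and random vectors, so the specific composition studied in the paper may differ:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams with boundary markers < and >
    (n-gram sizes are illustrative assumptions)."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def compose_word_vector(word, ngram_vectors, dim=4):
    """Sum the vectors of a word's n-grams; unknown n-grams are
    skipped, so even out-of-vocabulary words get a (partial) vector."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vectors:
            vec += ngram_vectors[g]
    return vec

rng = np.random.default_rng(0)
ngram_vectors = {g: rng.normal(size=4) for g in char_ngrams("playing")}
# "played" shares n-grams such as "<pla" and "play" with "playing",
# so it still receives a non-zero vector despite being unseen.
v = compose_word_vector("played", ngram_vectors)
```

This is exactly why subword models handle out-of-vocabulary entries: any new word decomposes into n-grams, many of which were seen during training.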
Adversarial Contrastive Estimation
Learning by contrasting positive and negative samples is a general strategy adopted by many methods.
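The canonical instance of this strategy in word-embedding training is skip-gram with negative sampling. The sketch below shows that generic contrastive objective, not the paper's adversarial variant, and all vectors are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context, negatives):
    """Contrastive objective: pull the true (center, context) pair
    together, push sampled negative words apart. This is the standard
    negative-sampling loss, shown as a generic illustration."""
    pos = -np.log(sigmoid(center @ context))
    neg = -sum(np.log(sigmoid(-center @ n)) for n in negatives)
    return pos + neg

rng = np.random.default_rng(1)
d = 8
center = rng.normal(size=d)
context = center + 0.01 * rng.normal(size=d)   # a "close" positive
negatives = [rng.normal(size=d) for _ in range(5)]
loss = neg_sampling_loss(center, context, negatives)
```

The quality of the learned embeddings depends heavily on how the negatives are drawn, which is the knob that adversarial variants of contrastive estimation target.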
Learning Word Embeddings from Speech
In this paper, we propose Sequence-to-Sequence Audio2Vec, a deep neural network architecture for unsupervised learning of fixed-length vector representations of audio segments excised from a speech corpus. The vectors carry semantic information about the segments: semantically similar segments lie close together in the embedding space.
Injecting Word Embeddings with Another Language's Resource: An Application of Bilingual Embeddings
Word embeddings learned from text corpus can be improved by injecting knowledge from external resources, while at the same time also specializing them for similarity or relatedness.
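A widely used recipe for injecting such external knowledge is retrofitting (after Faruqui et al., 2015): nudge each vector toward its lexical neighbours while staying close to the original. The sketch below is a minimal version with invented example words; the paper's bilingual setup would supply a different lexicon:

```python
import numpy as np

def retrofit(embeddings, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Retrofitting sketch: iteratively move each word vector toward
    the average of its lexicon neighbours, anchored to the original
    distributional vector by weight alpha."""
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            new[w] = (alpha * embeddings[w]
                      + beta * sum(new[n] for n in nbrs)) / (alpha + beta * len(nbrs))
    return new

emb = {"car": np.array([1.0, 0.0]), "automobile": np.array([0.0, 1.0])}
lexicon = {"car": ["automobile"], "automobile": ["car"]}
fit = retrofit(emb, lexicon)
# After retrofitting, the synonyms move closer together.
```

Raising beta specializes the space more aggressively for similarity, at the cost of drifting further from the corpus-derived vectors.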
Lexical Simplification with the Deep Structured Similarity Model
We explore the application of a Deep Structured Similarity Model (DSSM) to ranking in lexical simplification.
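At its core, DSSM-style ranking scores each candidate by the similarity of its learned representation to the query's. The sketch below keeps only that scoring step: fixed vectors and cosine similarity stand in for the deep towers a real DSSM would learn, and the candidate words and vectors are invented:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_substitutes(context_vec, candidate_vecs):
    """Rank candidate simplifications by cosine similarity to the
    sentence context (a stand-in for DSSM's learned similarity)."""
    return sorted(candidate_vecs,
                  key=lambda w: cosine(context_vec, candidate_vecs[w]),
                  reverse=True)

context = np.array([1.0, 0.2])
candidates = {"simple": np.array([0.9, 0.3]),
              "facile": np.array([-0.5, 1.0])}
ranking = rank_substitutes(context, candidates)
```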
Learning Word Embeddings for Hyponymy with Entailment-Based Distributional Semantics
Lexical entailment, such as hyponymy, is a fundamental issue in the semantics of natural language.
Using $k$-way Co-occurrences for Learning Word Embeddings
Co-occurrences between two words provide useful insights into the semantics of those words.
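Generalizing from pairs, a $k$-way co-occurrence is a set of $k$ words appearing together. One simple way to count them, sketched below with an assumed sliding window (the paper's exact extraction procedure may differ):

```python
from collections import Counter
from itertools import combinations

def k_way_cooccurrences(tokens, k=3, window=4):
    """Count k-way co-occurrences: unordered sets of k distinct words
    that appear together inside a sliding window (window size is an
    illustrative choice)."""
    counts = Counter()
    for i in range(len(tokens) - window + 1):
        span = tokens[i:i + window]
        for combo in combinations(sorted(set(span)), k):
            counts[combo] += 1
    return counts

toks = "the cat sat on the mat".split()
counts = k_way_cooccurrences(toks, k=3, window=4)
```

For k=2 this reduces to the familiar pairwise co-occurrence counts that models like GloVe factorize; larger k captures higher-order context at the cost of much sparser statistics.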
Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects
Using a single GPU, we were able to scale from a vocabulary of 2048 embedded words and 500K training examples to 32768 words and 10M training examples, while keeping the validation loss stable and the training time per epoch approximately linear.
AutoExtend: Combining Word Embeddings with Semantic Resources
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.