Learning Word Embeddings

23 papers with code • 0 benchmarks • 0 datasets


Latest papers with no code

Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets

no code yet • NAACL 2018

In the second approach, domain-adapted (DA) word embeddings are learned by exploiting the specificity of domain-specific data sets and the breadth of generic word embeddings.
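One simple way to picture combining a domain-specific embedding space with a generic one is a weighted average over the shared vocabulary. The sketch below is illustrative only (the paper itself learns the combination rather than fixing a weight), and the function and variable names are invented for the example:

```python
import numpy as np

def combine_embeddings(generic, domain, alpha=0.5):
    """Blend generic and domain-specific vectors for words present in both
    vocabularies. Illustrative sketch; not the paper's exact procedure."""
    shared = sorted(set(generic) & set(domain))
    return {w: alpha * generic[w] + (1 - alpha) * domain[w] for w in shared}

generic = {"good": np.array([1.0, 0.0]), "movie": np.array([0.0, 1.0])}
domain = {"good": np.array([0.0, 1.0]), "plot": np.array([1.0, 1.0])}
combined = combine_embeddings(generic, domain)  # only "good" is shared
```

Words outside the overlap keep whichever embedding exists for them; the interesting modelling question the paper addresses is how to weight the two sources rather than picking `alpha` by hand.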

Subword-level Composition Functions for Learning Word Embeddings

no code yet • WS 2018

Subword-level information is crucial for capturing the meaning and morphology of words, especially for out-of-vocabulary entries.
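A common subword-level composition function, in the fastText tradition, builds a word vector by summing vectors for its character n-grams, which lets the model produce embeddings for out-of-vocabulary words. A minimal sketch (the deterministic hashing here just stands in for a learned n-gram table):

```python
import zlib
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of a word, with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def ngram_vector(gram, dim=8):
    # Deterministically seed from the n-gram; in a real model this
    # would be a learned (or hashed-bucket) embedding table lookup.
    seed = zlib.crc32(gram.encode("utf8"))
    return np.random.default_rng(seed).standard_normal(dim)

def subword_embedding(word, dim=8):
    """Compose a word vector as the sum of its n-gram vectors,
    so even unseen words get a representation."""
    return np.sum([ngram_vector(g, dim) for g in char_ngrams(word)], axis=0)
```

Morphologically related words share n-grams (`running` and `runner` share `<ru`, `run`, `unn`, ...), which is exactly why subword composition captures morphology.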

Adversarial Contrastive Estimation

no code yet • ACL 2018

Learning by contrasting positive and negative samples is a general strategy adopted by many methods.
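The standard instance of this strategy in word-embedding training is the negative-sampling objective: push the score of an observed (word, context) pair up and the scores of sampled noise pairs down. A minimal sketch of that loss (the paper's contribution is to replace the fixed noise sampler with an adversarially trained one):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(target, positive, negatives):
    """Contrastive loss: reward a high dot product with the true context
    and low dot products with sampled negative contexts."""
    pos = -np.log(sigmoid(target @ positive))
    neg = -sum(np.log(sigmoid(-target @ n)) for n in negatives)
    return pos + neg

t = np.array([1.0, 0.0])
good = negative_sampling_loss(t, np.array([1.0, 0.0]), [np.array([-1.0, 0.0])])
bad = negative_sampling_loss(t, np.array([-1.0, 0.0]), [np.array([1.0, 0.0])])
```

A well-aligned positive with anti-aligned negatives yields a lower loss than the reverse, which is the gradient signal the embeddings are trained on.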

Learning Word Embeddings from Speech

no code yet • 5 Nov 2017

In this paper, we propose Sequence-to-Sequence Audio2Vec, a novel deep neural network architecture for unsupervised learning of fixed-length vector representations of audio segments excised from a speech corpus. The learned vectors carry semantic information about the segments, and lie close to one another in the embedding space when their corresponding segments are semantically similar.

Injecting Word Embeddings with Another Language's Resource: An Application of Bilingual Embeddings

no code yet • IJCNLP 2017

Word embeddings learned from text corpus can be improved by injecting knowledge from external resources, while at the same time also specializing them for similarity or relatedness.
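Injecting knowledge from an external resource is often done by retrofitting: iteratively nudging each word vector toward its resource-defined neighbours while keeping it close to its original corpus-trained position. The sketch below shows that generic scheme, not this paper's specific bilingual method:

```python
import numpy as np

def retrofit(vectors, lexicon, iterations=10, beta=1.0):
    """Retrofitting-style update: each word moves toward the average of
    its lexicon neighbours, anchored to its original vector."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in new or not nbrs:
                continue
            # Weighted average of the original vector and neighbour vectors.
            total = vectors[w] + beta * sum(new[n] for n in nbrs)
            new[w] = total / (1 + beta * len(nbrs))
    return new

vecs = {"car": np.array([1.0, 0.0]), "automobile": np.array([0.0, 1.0])}
out = retrofit(vecs, {"car": ["automobile"]})
```

After retrofitting, words the resource links (here `car` and `automobile`) end up closer than their corpus-only vectors were; in the bilingual setting the "resource" is a lexicon in another language.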

Lexical Simplification with the Deep Structured Similarity Model

no code yet • IJCNLP 2017

We explore the application of a Deep Structured Similarity Model (DSSM) to ranking in lexical simplification.
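A DSSM projects both sides of a pair (here, a context and a candidate substitution) into a shared space with deep networks and ranks candidates by cosine similarity. The toy vectors below stand in for those learned projections; only the ranking step is shown:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_candidates(context_vec, candidates):
    """Rank simplification candidates by cosine similarity to the
    context, as a DSSM does after projecting both into a shared space."""
    return sorted(candidates,
                  key=lambda c: cosine(context_vec, candidates[c]),
                  reverse=True)

context = np.array([1.0, 0.0])
cands = {"use": np.array([0.9, 0.1]), "utilize": np.array([0.1, 0.9])}
ranking = rank_candidates(context, cands)
```

For lexical simplification, the candidate ranked highest is the substitution offered to the reader; the hard part the paper studies is learning projections for which this ranking favours simpler words that still fit the context.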

Learning Word Embeddings for Hyponymy with Entailment-Based Distributional Semantics

no code yet • 6 Oct 2017

Lexical entailment, such as hyponymy, is a fundamental issue in the semantics of natural language.

Using $k$-way Co-occurrences for Learning Word Embeddings

no code yet • 5 Sep 2017

Co-occurrences between two words provide useful insights into the semantics of those words.
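The $k$-way generalisation counts joint occurrences of $k$ words in the same context rather than just pairs. A minimal counting sketch, treating each sentence as one context (real implementations would use a sliding window and weighting):

```python
from collections import Counter
from itertools import combinations

def k_way_cooccurrences(sentences, k=3):
    """Count co-occurrences of k distinct words per context,
    generalising the usual pairwise (k=2) counts."""
    counts = Counter()
    for tokens in sentences:
        for combo in combinations(sorted(set(tokens)), k):
            counts[combo] += 1
    return counts

sents = [["the", "cat", "sat"], ["the", "cat", "ran"]]
triples = k_way_cooccurrences(sents, k=3)
pairs = k_way_cooccurrences(sents, k=2)
```

Setting `k=2` recovers ordinary co-occurrence counts; larger `k` captures joint statistics (e.g. that `cat` appears with both `the` and `sat`) that pairwise counts cannot express.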

Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects

no code yet • 4 Sep 2017

Using a single GPU, we were able to scale up from a vocabulary of 2,048 words and 500K training examples to 32,768 words and 10M training examples, while keeping a stable validation loss and an approximately linear trend in training time per epoch.

AutoExtend: Combining Word Embeddings with Semantic Resources

no code yet • CL 2017

We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
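AutoExtend learns synset and lexeme embeddings jointly with word embeddings under decomposition constraints; a crude stand-in for that idea is to initialise each synset vector as the mean of its member words' vectors. The sketch below shows only that approximation, not AutoExtend's learned model:

```python
import numpy as np

def synset_embeddings(word_vecs, synsets):
    """Approximate each synset's embedding as the mean of its member
    words' vectors (a rough stand-in for AutoExtend's learned
    decomposition of words into lexemes and synsets)."""
    out = {}
    for synset, members in synsets.items():
        vecs = [word_vecs[w] for w in members if w in word_vecs]
        if vecs:
            out[synset] = np.mean(vecs, axis=0)
    return out

wv = {"car": np.array([1.0, 0.0]), "auto": np.array([0.0, 1.0])}
syn = synset_embeddings(wv, {"car.n.01": ["car", "auto"]})
```

This gives non-word objects (synsets, entities) vectors in the same space as words, which is the property AutoExtend exploits; the full system additionally propagates resource structure back into the word embeddings themselves.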