# Word Embeddings

652 papers with code · Methodology

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
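The mapping can be illustrated with a minimal sketch: a lookup table from words to real-valued vectors, compared with cosine similarity. The vectors below are hand-picked for illustration, not trained embeddings.

```python
import numpy as np

# Toy word-to-vector lookup table; values are illustrative, not trained.
embeddings = {
    "king":  np.array([0.8, 0.65, 0.1]),
    "queen": np.array([0.75, 0.7, 0.9]),
    "apple": np.array([0.1, 0.05, 0.2]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words should end up closer in the vector space.
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

In a trained model the table is learned from co-occurrence statistics; here it only shows the interface: word in, vector out.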

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

No evaluation results yet. Help compare methods by submitting evaluation metrics.

# Adversarial Training Methods for Semi-Supervised Text Classification

25 May 2016 · tensorflow/models

Adversarial training provides a means of regularizing supervised learning algorithms, while virtual adversarial training extends supervised learning algorithms to the semi-supervised setting.
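The core move can be sketched on a toy linear model: perturb the continuous input (in the paper, the word embeddings) a small step in the direction that most increases the loss, then train on the perturbed input. The model and numbers below are illustrative, not the paper's setup.

```python
import numpy as np

def loss(x, y, w):
    """Logistic loss of a linear classifier on input x with label y in {-1, +1}."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def adversarial_perturbation(x, y, w, epsilon=0.1):
    """L2-normalized gradient step on the input (the fast-gradient r_adv)."""
    margin = -y * np.dot(w, x)
    grad = -y * w * (1.0 / (1.0 + np.exp(-margin)))  # dL/dx in closed form
    return epsilon * grad / (np.linalg.norm(grad) + 1e-12)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.7])   # stand-in for a word-embedding input
y = 1
r = adversarial_perturbation(x, y, w)

# The perturbed input is at least as hard for the model as the clean one.
clean_loss = loss(x, y, w)
adv_loss = loss(x + r, y, w)
```

Virtual adversarial training applies the same idea without labels, perturbing toward the direction that most changes the model's own output distribution.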

66,757

# FastText.zip: Compressing text classification models

We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory.
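One simple way to shrink an embedding matrix can be sketched as uniform 8-bit scalar quantization. (The paper itself uses product quantization plus vocabulary pruning; this sketch only illustrates the memory/accuracy trade-off.)

```python
import numpy as np

def quantize(embeddings):
    """Map a float32 embedding matrix to uint8 codes plus a scale and offset."""
    lo, hi = embeddings.min(), embeddings.max()
    scale = (hi - lo) / 255.0
    codes = np.round((embeddings - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Reconstruct approximate float32 embeddings from the uint8 codes."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 100)).astype(np.float32)  # toy embedding matrix
codes, scale, lo = quantize(E)
E_hat = dequantize(codes, scale, lo)
compression = E.nbytes / codes.nbytes  # 4x: float32 -> uint8
```

Product quantization pushes further by splitting each vector into subvectors and storing a learned codebook index per subvector.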

21,742

# Enriching Word Vectors with Subword Information

A vector representation is associated to each character $n$-gram; a word is represented as the sum of these representations.
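The subword idea can be sketched directly: enumerate a word's character n-grams (fastText wraps the word in `<` and `>` boundary markers) and sum one vector per n-gram. The hashing trick, bucket count, and dimension below are illustrative choices, not the paper's.

```python
import zlib
import numpy as np

DIM, BUCKETS = 8, 2**16
rng = np.random.default_rng(0)
ngram_table = rng.normal(size=(BUCKETS, DIM))  # one vector per n-gram bucket

def char_ngrams(word, n_min=3, n_max=6):
    """All character n-grams of the boundary-marked word, for n in [n_min, n_max]."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word):
    """Represent a word as the sum of its character n-gram vectors."""
    return sum(ngram_table[zlib.crc32(g.encode()) % BUCKETS]
               for g in char_ngrams(word))

v = word_vector("where")
```

Because the vector is built from shared subword pieces, out-of-vocabulary words still get a representation.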

21,742

# Toward Better Storylines with Sentence-Level Language Models

We propose a sentence-level language model which selects the next sentence in a story from a finite set of fluent alternatives.
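The selection setup can be sketched as scoring each candidate sentence against the story context and taking the argmax. A bag-of-words embedding stands in here for the paper's learned sentence model; vocabulary and sentences are made up.

```python
import numpy as np

VOCAB = {"the": 0, "dragon": 1, "slept": 2, "ate": 3, "knight": 4, "fled": 5}

def embed(sentence):
    """Toy bag-of-words sentence embedding over a fixed vocabulary."""
    v = np.zeros(len(VOCAB))
    for tok in sentence.lower().split():
        if tok in VOCAB:
            v[VOCAB[tok]] += 1.0
    return v

def select_next(context, candidates):
    """Pick the candidate sentence whose embedding scores highest vs. the context."""
    c = embed(context)
    scores = [float(np.dot(c, embed(s))) for s in candidates]
    return candidates[int(np.argmax(scores))]

best = select_next("the dragon slept",
                   ["the knight fled", "the dragon ate the knight"])
```

Restricting generation to a finite candidate set turns open-ended story continuation into a ranking problem.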

13,024

# Named Entity Recognition with Bidirectional LSTM-CNNs

Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.

9,467

# Contextual String Embeddings for Sequence Labeling

Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
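"Language as distributions over characters" can be sketched with the simplest possible character model: a bigram table estimated by counting, giving P(next char | current char). A toy corpus stands in for the large corpora (and recurrent networks) used in the paper.

```python
import collections

corpus = "the cat sat on the mat"

# Count character bigrams: counts[prev][next] = occurrences.
counts = collections.defaultdict(collections.Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_char_distribution(ch):
    """Empirical distribution over the character following ch."""
    total = sum(counts[ch].values())
    return {c: n / total for c, n in counts[ch].items()}

dist = next_char_distribution("t")
```

Contextual string embeddings replace this count table with the hidden states of a character-level recurrent LM, so the same word gets different vectors in different contexts.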

9,453

# Learning Multilingual Word Embeddings in Latent Metric Space: A Geometric Approach

Our approach decouples learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
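Step (a), aligning language-specific embeddings via a learned rotation, has a classical closed form: the orthogonal Procrustes solution from an SVD of the cross-covariance. The sketch below recovers a hidden orthogonal map from paired toy embeddings; sizes and data are illustrative.

```python
import numpy as np

def learn_rotation(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F for paired embedding matrices X, Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                        # "source language" embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # hidden orthogonal map
Y = X @ R_true                                      # aligned "common space" embeddings
W = learn_rotation(X, Y)
```

Step (b) then learns a similarity metric inside the common space, rather than assuming plain dot products suffice.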

8,441

# Analogical Reasoning on Chinese Morphological and Semantic Relations

Analogical reasoning is effective in capturing linguistic regularities.
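The standard analogy test ("a is to b as c is to ?") can be sketched as a nearest-neighbor search around the offset vector b - a + c, excluding the query words. The vectors below are hand-picked toys, not trained embeddings.

```python
import numpy as np

embeddings = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([0.2, 0.0, 1.0]),
    "queen": np.array([0.2, 1.0, 1.0]),
    "apple": np.array([0.9, 0.1, 0.1]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by nearest cosine neighbor to b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best, best_sim = None, -np.inf
    for word, v in embeddings.items():
        if word in (a, b, c):
            continue  # exclude the query words themselves
        sim = np.dot(target, v) / (np.linalg.norm(target) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best
```

The cited dataset applies the same evaluation scheme to Chinese morphological and semantic relations.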

7,889