Word Similarity

111 papers with code • 0 benchmarks • 2 datasets

Calculate a numerical score for the semantic similarity between two words.
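In practice the score is most often the cosine between the two words' embedding vectors. A minimal sketch with toy hand-set vectors (the values below are illustrative, not taken from any trained model):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional vectors standing in for learned embeddings.
cat = np.array([0.8, 0.1, 0.3])
dog = np.array([0.7, 0.2, 0.4])
car = np.array([0.1, 0.9, 0.2])

print(cosine_similarity(cat, dog))  # higher: semantically close pair
print(cosine_similarity(cat, car))  # lower: less related pair
```

Real systems swap the toy vectors for rows of a trained embedding matrix; the scoring function stays the same.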

Most implemented papers

Construction of a Japanese Word Similarity Dataset

tmu-nlp/JapaneseWordSimilarityDataset LREC 2018

Distributed word representations are generally evaluated using a word similarity task and/or a word analogy task.
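Concretely, the word similarity task scores a model by rank-correlating its similarity scores with human judgments of word pairs, usually via Spearman's ρ. A sketch with hypothetical ratings (the pairs and numbers are invented for illustration):

```python
from scipy.stats import spearmanr

# Hypothetical human similarity ratings for word pairs (scale 0-10)
# and the model's scores for the same pairs (e.g. cosine similarities).
human = [9.2, 7.5, 3.1, 1.0]      # (cat, dog), (cup, mug), (car, tree), (cat, algebra)
model = [0.91, 0.78, 0.35, 0.12]

rho, _ = spearmanr(human, model)
print(rho)  # 1.0 here: the model ranks all pairs exactly as the annotators do
```

Only the ranking matters for Spearman's ρ, which is why it is preferred over Pearson correlation when the two scales are not comparable.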

ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge

commonsense/conceptnet-numberbatch SEMEVAL 2017

This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet.

Multimodal Word Distributions

benathi/word2gm ACL 2017

Word embeddings provide point representations of words containing useful semantic information.

A Simple Approach to Learn Polysemous Word Embeddings

dingwc/multisense 6 Jul 2017

Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space are limited, especially when compared with single-sense embeddings.

Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons

HwiyeolJo/Extrofitting WS 2018

The method consists of three steps, the first being: (i) expanding one or more dimensions on all the word vectors, filled with their representative value.
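Step (i) can be sketched as follows; using the per-vector mean as the representative value is an assumption made here for illustration, not necessarily the paper's exact choice:

```python
import numpy as np

def expand_with_representative(embeddings: np.ndarray) -> np.ndarray:
    """Sketch of extrofitting step (i): append one extra dimension to every
    word vector, filled with a representative value per word (assumed here
    to be the mean of that word's vector)."""
    rep = embeddings.mean(axis=1, keepdims=True)  # one scalar per word
    return np.concatenate([embeddings, rep], axis=1)

vocab = np.random.randn(5, 300)  # 5 words, 300-dimensional vectors
expanded = expand_with_representative(vocab)
print(expanded.shape)  # (5, 301)
```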

Imparting Interpretability to Word Embeddings while Preserving Semantic Structure

koclab/imparting-interpretability 19 Jul 2018

In other words, we align words that are already determined to be related, along predefined concepts.

Learning Multilingual Word Embeddings in Latent Metric Space: A Geometric Approach

anoopkunchukuttan/geomm TACL 2019

Our approach decouples learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
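Part (a), learning a rotation that aligns one language's embeddings to a common space, can be sketched with orthogonal Procrustes on synthetic data; using the identity matrix as a stand-in for the learned similarity metric of part (b) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 20

# Synthetic source-language embeddings and a ground-truth rotation
# producing the target-language embeddings of their translations.
X = rng.standard_normal((n, d))
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal map
Y = X @ Q

# (a) Learn the rotation aligning X to Y: orthogonal Procrustes via SVD.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# (b) Score pairs in the common space with a metric B
# (identity here, standing in for the learned Mahalanobis-style metric).
B = np.eye(d)
sim = (X @ W) @ B @ Y.T  # pairwise cross-lingual similarity matrix

print(np.allclose(W, Q))  # the recovered rotation matches the true one
```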

Skip-gram word embeddings in hyperbolic space

lateral/minkowski 30 Aug 2018

Recent work has demonstrated that embeddings of tree-like graphs in hyperbolic space surpass their Euclidean counterparts in performance by a large margin.
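For illustration, the geodesic distance in the hyperboloid (Minkowski) model of hyperbolic space, which such embeddings are trained against, can be computed as below; the lifting map and example points are invented for the sketch:

```python
import numpy as np

def minkowski_dot(u: np.ndarray, v: np.ndarray) -> float:
    """Minkowski bilinear form: the last coordinate carries a minus sign."""
    return float(np.dot(u[:-1], v[:-1]) - u[-1] * v[-1])

def hyperbolic_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points on the hyperboloid model."""
    # Clip guards against arccosh arguments dipping below 1 numerically.
    return float(np.arccosh(np.clip(-minkowski_dot(u, v), 1.0, None)))

def lift(x: np.ndarray) -> np.ndarray:
    """Lift a Euclidean point onto the hyperboloid x_n = sqrt(1 + |x|^2)."""
    return np.append(x, np.sqrt(1.0 + np.dot(x, x)))

a = lift(np.array([0.3, 0.1]))
b = lift(np.array([-0.5, 0.4]))
print(hyperbolic_distance(a, b))  # positive; grows fast toward the boundary
```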

FRAGE: Frequency-Agnostic Word Representation

ChengyueGongR/FrequencyAgnostic NeurIPS 2018

Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks.

BCWS: Bilingual Contextual Word Similarity

MiuLab/BCWS 21 Oct 2018

This paper introduces the first dataset for evaluating English-Chinese Bilingual Contextual Word Similarity, namely BCWS (https://github.com/MiuLab/BCWS).