Word Similarity
111 papers with code • 0 benchmarks • 2 datasets
Calculate a numerical score for the semantic similarity between two words.
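In its simplest form, such a score can be computed as the cosine similarity between pretrained word vectors. A minimal sketch, assuming the vectors are available as NumPy arrays (the example vectors here are made up for illustration):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional embeddings, purely for illustration.
cat = np.array([0.2, 0.5, 0.1])
dog = np.array([0.3, 0.4, 0.2])

print(cosine_similarity(cat, dog))  # closer to 1.0 for more similar words
```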
Libraries
Use these libraries to find Word Similarity models and implementations.

Most implemented papers
Construction of a Japanese Word Similarity Dataset
Evaluation of distributed word representations is generally conducted using a word similarity task and/or a word analogy task.
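Concretely, a word similarity evaluation scores a list of word pairs with the model and compares those scores against human judgments, usually via Spearman's rank correlation. A minimal sketch with made-up gold and model scores:

```python
from scipy.stats import spearmanr

# Hypothetical human similarity judgments for four word pairs,
# alongside a model's scores (e.g. cosine similarities) for the same pairs.
gold_scores = [9.0, 7.5, 3.2, 0.5]
model_scores = [0.81, 0.74, 0.40, 0.12]

rho, p_value = spearmanr(gold_scores, model_scores)
print(f"Spearman's rho = {rho:.3f}")
```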
ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge
This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet.
Multimodal Word Distributions
Word embeddings provide point representations of words containing useful semantic information.
A Simple Approach to Learn Polysemous Word Embeddings
Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space are limited, especially when compared with single-sense embeddings.
Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons
The method consists of three steps: (i) expanding all the word vectors by one or more dimensions, filled with their representative value.
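A minimal sketch of step (i), assuming the representative value is the mean of each vector's components (the helper name and that choice are illustrative, not necessarily the paper's exact implementation):

```python
import numpy as np

def expand_with_representative(embeddings: np.ndarray) -> np.ndarray:
    """Append one extra dimension to every word vector, filled with a
    representative value (here, the mean of that vector's components)."""
    rep = embeddings.mean(axis=1, keepdims=True)  # one value per word
    return np.hstack([embeddings, rep])

E = np.random.rand(5, 300)                   # 5 hypothetical word vectors
print(expand_with_representative(E).shape)   # (5, 301)
```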
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
In other words, we align words that have already been determined to be related along predefined concepts.
Learning Multilingual Word Embeddings in Latent Metric Space: A Geometric Approach
Our approach decouples learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
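Step (a) is closely related to the classic orthogonal Procrustes problem. A minimal sketch of that rotation step under this assumption, using random matrices in place of real embeddings and omitting the learned similarity metric of step (b):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical embeddings for a seed dictionary of 100 translation pairs.
X_src = np.random.rand(100, 50)
X_tgt = np.random.rand(100, 50)

# Find the orthogonal rotation R minimizing ||X_src @ R - X_tgt||_F,
# i.e. an alignment of the source space to the target space.
R, _ = orthogonal_procrustes(X_src, X_tgt)
X_src_aligned = X_src @ R
```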
Skip-gram word embeddings in hyperbolic space
Recent work has demonstrated that embeddings of tree-like graphs in hyperbolic space surpass their Euclidean counterparts in performance by a large margin.
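For reference, the distance between two points u, v inside the unit Poincare ball, a standard model of hyperbolic space, is d(u, v) = arccosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))). A minimal sketch of that formula (not the paper's training code):

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Hyperbolic distance between two points inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return float(np.arccosh(1 + 2 * sq_dist / denom))

u = np.array([0.1, 0.2])    # both points must have norm < 1
v = np.array([-0.3, 0.4])
print(poincare_distance(u, v))
```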
FRAGE: Frequency-Agnostic Word Representation
Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks.
BCWS: Bilingual Contextual Word Similarity
This paper introduces the first dataset for evaluating English-Chinese Bilingual Contextual Word Similarity, namely BCWS (https://github.com/MiuLab/BCWS).