Semantic Composition
20 papers with code • 0 benchmarks • 2 datasets
Understanding the meaning of a text by composing the meanings of its individual words. (Source: https://arxiv.org/pdf/1405.7908.pdf)
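For orientation, here is a minimal sketch of the idea in code: two classic composition functions (additive and element-wise multiplicative) applied to toy word vectors. The embedding values and the `compose_*` helper names are illustrative only and are not drawn from any of the papers listed below.

```python
import numpy as np

# Toy word embeddings (made-up values, for illustration only).
emb = {
    "red": np.array([0.9, 0.1, 0.3]),
    "car": np.array([0.2, 0.8, 0.5]),
}

def compose_additive(vectors):
    """Additive composition: the phrase vector is the sum of the word vectors."""
    return np.sum(vectors, axis=0)

def compose_multiplicative(vectors):
    """Multiplicative composition: element-wise product of the word vectors."""
    return np.prod(vectors, axis=0)

phrase = ["red", "car"]
print(compose_additive([emb[w] for w in phrase]))        # [1.1 0.9 0.8]
print(compose_multiplicative([emb[w] for w in phrase]))  # [0.18 0.08 0.15]
```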
Benchmarks
These leaderboards are used to track progress in Semantic Composition
Latest papers
Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference
To this end, we modify the original texts using a set of phrases (modifiers) that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) (MacCartney, 2009).
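A rough, hypothetical sketch of that modifier-insertion idea: prefix a premise with quantifier or negation phrases to generate new premise variants. The modifier phrases below are made up for illustration and are not the paper's actual generation rules.

```python
# Illustrative Natural Logic-style modifiers (not the paper's actual phrase set).
MODIFIERS = {
    "universal":   "Every time,",
    "existential": "Sometimes,",
    "negation":    "It is not the case that",
}

def apply_modifier(sentence: str, kind: str) -> str:
    """Return the sentence prefixed with a Natural Logic-style modifier phrase."""
    prefix = MODIFIERS[kind]
    return f"{prefix} {sentence[0].lower()}{sentence[1:]}"

premise = "The dog is sleeping on the couch."
for kind in MODIFIERS:
    print(apply_modifier(premise, kind))
```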
Semantic Prediction: Which One Should Come First, Recognition or Prediction?
The ultimate goal of video prediction is not forecasting future pixel values given some previous frames.
SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data
As the main discriminative information of a fine-grained image usually resides in subtle regions, methods along this line are prone to heavy label noise in fine-grained recognition.
Ontology-guided Semantic Composition for Zero-Shot Learning
Zero-shot learning (ZSL) is a popular research problem that aims to make predictions for classes that never appear in the training stage by exploiting inter-class relationships and some side information.
SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics
We propose SentiBERT, a variant of BERT that effectively captures compositional sentiment semantics.
Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics
Functional Distributional Semantics provides a linguistically interpretable framework for distributional semantics, by representing the meaning of a word as a function (a binary classifier), instead of a vector.
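A minimal sketch of that idea under simplifying assumptions: a word is a logistic binary classifier over latent entity representations ("pixies") rather than a static vector. The functional form, dimensionality, and names here are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_word_classifier(dim: int = 8):
    """Return a function entity_vector -> probability that the word applies to it."""
    w = rng.normal(size=dim)
    b = rng.normal()
    def classifier(entity: np.ndarray) -> float:
        # Logistic score: how plausibly the word is true of this entity.
        return float(1.0 / (1.0 + np.exp(-(entity @ w + b))))
    return classifier

dog = make_word_classifier()
entity = rng.normal(size=8)   # latent representation of some individual
print(dog(entity))            # probability that "dog" holds of this entity
```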
Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models
Human and metric-based evaluations of both LSTM and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms.
No Word is an Island -- A Transformation Weighting Model for Semantic Composition
Composition models of distributional semantics are used to construct phrase representations from the representations of their words.
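A loose sketch of a transformation-based composition function, with assumed matrix shapes and combination weights: each constituent vector is passed through a learned linear transformation before the results are combined. The paper's transformation weighting model differs in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W_head, W_mod = rng.normal(size=(d, d)), rng.normal(size=(d, d))
alpha, beta = 0.6, 0.4   # illustrative combination weights

def compose(modifier_vec: np.ndarray, head_vec: np.ndarray) -> np.ndarray:
    """Phrase vector built from transformed modifier and head vectors."""
    return alpha * (W_mod @ modifier_vec) + beta * (W_head @ head_vec)

red, car = rng.normal(size=d), rng.normal(size=d)
print(compose(red, car))   # composed representation of the phrase "red car"
```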
Semantic Hilbert Space for Text Representation Learning
To address this issue, we propose a new framework that models different levels of semantic units (e.g., sememe, word, sentence, and semantic abstraction) on a single Semantic Hilbert Space, which naturally admits non-linear semantic composition by means of complex-valued word vector representations.
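A rough illustration, under assumptions, of why complex-valued word vectors allow non-additive interactions: each dimension carries an amplitude and a phase, and phases can interfere when vectors are superposed. This is only a toy example, not the paper's actual Semantic Hilbert Space formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def complex_embedding(dim: int = 6) -> np.ndarray:
    """A toy complex-valued word vector: amplitude and phase per dimension."""
    amplitude = rng.random(dim)
    phase = rng.uniform(0, 2 * np.pi, dim)
    return amplitude * np.exp(1j * phase)

def superpose(vectors) -> np.ndarray:
    """Sum complex word vectors and renormalise to unit length."""
    total = np.sum(vectors, axis=0)
    return total / np.linalg.norm(total)

phrase = superpose([complex_embedding(), complex_embedding()])
print(np.abs(phrase))   # amplitudes reflect interference between the phases
```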
From Characters to Time Intervals: New Paradigms for Evaluation and Neural Parsing of Time Normalizations
This paper presents the first model for time normalization trained on the SCATE corpus.