Sentence Embedding
132 papers with code • 0 benchmarks • 7 datasets
Benchmarks
These leaderboards are used to track progress in Sentence Embedding
Libraries
Use these libraries to find Sentence Embedding models and implementations.
Latest papers with no code
LSTM-based Deep Neural Network With A Focus on Sentence Representation for Sequential Sentence Classification in Medical Scientific Abstracts
For this reason, sentence embedding is crucial for capturing both the semantic information between words within a sentence and the contextual relationships among sentences within the abstract, providing a comprehensive representation for better classification.
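Mean pooling over token vectors is one common way to form such a sentence-level representation (this is a generic illustration, not the paper's LSTM model); a minimal sketch with toy vectors:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average token vectors into one sentence vector, ignoring padded positions."""
    m = mask[:, None].astype(float)                 # (seq_len, 1)
    return (token_embeddings * m).sum(axis=0) / m.sum()

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: 4 tokens of dimension 3; the last token is padding.
tokens = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [9.0, 9.0, 9.0]])   # padding row, masked out below
mask = np.array([1, 1, 1, 0])
sentence_vec = mean_pool(tokens, mask)  # -> [2/3, 2/3, 0]
```

The resulting fixed-size vector can then be compared across sentences (e.g. with `cosine`) or fed to a downstream classifier.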
Linguistic-Based Mild Cognitive Impairment Detection Using Informative Loss
Our proposed NLP framework consists of two Transformer-based modules, namely Sentence Embedding (SE) and Sentence Cross Attention (SCA).
Tracing the Genealogies of Ideas with Large Language Model Embeddings
In this paper, I present a novel method to detect intellectual influence across a large corpus.
Hierarchical Classification of Transversal Skills in Job Ads Based on Sentence Embeddings
Thus, a new approach is proposed for the hierarchical classification of transversal skills from job ads.
CodePrompt: Improving Source Code-Related Classification with Knowledge Features through Prompt Learning
Researchers have explored the potential of utilizing pre-trained language models, such as CodeBERT, to improve source code-related tasks.
Sentiment analysis in Tourism: Fine-tuning BERT or sentence embeddings concatenation?
Bidirectional Encoder Representations from Transformers (BERT) is undoubtedly among the most powerful techniques for Natural Language Processing tasks such as Named Entity Recognition, Question Answering, and Sentiment Analysis. However, traditional techniques still hold major potential for improving recent models, in particular word tokenization and embedding techniques, as well as the neural network architectures that now form the core of each model.
OrchestraLLM: Efficient Orchestration of Language Models for Dialogue State Tracking
Large language models (LLMs) have revolutionized the landscape of Natural Language Processing systems, but are computationally expensive.
Translation Aligned Sentence Embeddings for Turkish Language
Due to the limited availability of high-quality datasets for training sentence embeddings in Turkish, we propose a training methodology and regimen to develop a sentence embedding model.
TST$^\mathrm{R}$: Target Similarity Tuning Meets the Real World
Target similarity tuning (TST) is a method of selecting relevant examples in natural language (NL) to code generation through large language models (LLMs) to improve performance.
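A simple baseline for this kind of similarity-based example selection (a generic retrieval sketch, not the paper's TST method) ranks a pool of candidate examples by cosine similarity of their embeddings to the query:

```python
import numpy as np

def top_k_examples(query_vec: np.ndarray, pool_vecs: np.ndarray, k: int = 2) -> list:
    """Return indices of the k pool embeddings most similar to the query (cosine)."""
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity of each candidate to the query
    return np.argsort(-sims)[:k].tolist()

# Toy 2-D embeddings for three candidate examples and one query.
pool = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.9, 0.1]])
query = np.array([1.0, 0.0])
selected = top_k_examples(query, pool, k=2)   # indices of the 2 nearest examples
```

The selected examples would then be placed into the LLM prompt; TST itself goes further by tuning the embedding space toward target (code) similarity rather than using raw NL similarity.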
On the Dimensionality of Sentence Embeddings
Therefore, we propose a two-step training method for sentence representation learning models, wherein the encoder and the pooler are optimized separately to mitigate the overall performance loss in low-dimension scenarios.
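As a toy illustration of mapping sentence embeddings into a lower dimension with a linear pooler (here fitted via PCA as a stand-in for the paper's separately trained pooler):

```python
import numpy as np

def fit_linear_pooler(embeddings: np.ndarray, out_dim: int) -> np.ndarray:
    """Fit a linear projection (top PCA directions) from in_dim to out_dim."""
    centered = embeddings - embeddings.mean(axis=0)
    # Rows of vt are orthonormal principal directions, sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:out_dim].T            # projection matrix, shape (in_dim, out_dim)

rng = np.random.default_rng(0)
embs = rng.normal(size=(100, 16))    # stand-in for frozen-encoder sentence embeddings
proj = fit_linear_pooler(embs, out_dim=4)
low_dim = embs @ proj                # (100, 4) low-dimensional sentence embeddings
```

This mirrors the two-step idea in spirit: the encoder outputs are held fixed while only the projection to the low-dimensional space is fitted.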