Sentence Classification
104 papers with code • 6 benchmarks • 14 datasets
Most implemented papers
Augmenting Data with Mixup for Sentence Classification: An Empirical Study
Mixup, a recently proposed data augmentation method that linearly interpolates the inputs and targets of random pairs of samples, has demonstrated its ability to significantly improve the predictive accuracy of state-of-the-art networks for image classification.
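The mixup rule mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the `alpha` hyperparameter of the Beta distribution and the one-hot label encoding are standard assumptions from the mixup literature.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Linearly interpolate a pair of inputs and their one-hot targets.

    A coefficient lam is drawn from Beta(alpha, alpha); the same lam
    mixes both the inputs (e.g. sentence embeddings) and the labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Usage: mix two toy "sentences" (feature vectors) with opposite labels.
xa, ya = np.array([0.0, 0.0]), np.array([1.0, 0.0])
xb, yb = np.array([1.0, 1.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(xa, ya, xb, yb, rng=np.random.default_rng(0))
```

The mixed label `y_mix` is a convex combination of the two one-hot vectors, so its entries still sum to one and can be trained against with a soft cross-entropy loss.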
Investigating an Effective Character-level Embedding in Korean Sentence Classification
Different from the writing systems of many Romance and Germanic languages, some languages or language families show complex conjunct forms in character composition.
On Dimensional Linguistic Properties of the Word Embedding Space
Word embeddings have become a staple of several natural language processing tasks, yet much remains to be understood about their properties.
CIRCE at SemEval-2020 Task 1: Ensembling Context-Free and Context-Dependent Word Representations
This paper describes the winning contribution to SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection (Subtask 2) handed in by team UG Student Intern.
GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples
Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks.
Voice@SRIB at SemEval-2020 Task 9 and 12: Stacked Ensembling method for Sentiment and Offensiveness detection in Social Media
The use of pre-trained embeddings usually helps in multiple tasks such as sentence classification and machine translation.
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models
Pre-trained multilingual language models show significant performance gains for zero-shot cross-lingual model transfer on a wide range of natural language understanding (NLU) tasks.
Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs
ColexNet's nodes are concepts and its edges are colexifications.
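The graph structure described above can be illustrated minimally. This is a hypothetical sketch, not the paper's data or code; the concept names and the well-known colexification examples (e.g., one word covering both "hand" and "arm" in Russian, or "moon" and "month" in Turkish) are illustrative assumptions.

```python
# Sketch of a colexification graph: nodes are concepts, and an edge
# links two concepts that share a single word form in some language.
# Edge values list illustrative language:word witnesses (assumed examples).
colexnet_edges = {
    frozenset({"HAND", "ARM"}): ["rus:ruka"],
    frozenset({"TREE", "WOOD"}): ["dan:trae"],
    frozenset({"MOON", "MONTH"}): ["tur:ay"],
}

def neighbors(concept, edges):
    """Concepts colexified with `concept` in at least one language."""
    return sorted(c for edge in edges for c in edge
                  if concept in edge and c != concept)
```

For example, `neighbors("HAND", colexnet_edges)` returns `["ARM"]`, since the two concepts share a word form in the sample edge list.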
Active Discriminative Text Representation Learning
We also show that, as expected, the method quickly learns discriminative word embeddings.