Zero-Shot Text Classification

25 papers with code • 3 benchmarks • 4 datasets

Zero-shot text classification assigns one or more labels to a piece of text without any labeled training examples for those labels, typically by verbalizing each candidate label in natural language and scoring its compatibility with the input, e.g., via entailment, embedding similarity, or prompting.

Most implemented papers

Making Pre-trained Language Models Better Few-shot Learners

princeton-nlp/LM-BFF ACL 2021

We present LM-BFF--better few-shot fine-tuning of language models--a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples.
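
LM-BFF builds on prompt-based fine-tuning: the input is wrapped in a cloze template and each class is mapped to a label word whose logit at the mask position serves as the class score. A minimal sketch of that scoring step; the backbone, template, and label words here are illustrative (the paper searches for templates and label words automatically):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-base")   # illustrative backbone
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

text = "No reason to watch this film."
template = f"{text} It was {tok.mask_token}."          # cloze-style template
label_words = {"positive": " great", "negative": " terrible"}

inputs = tok(template, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = mlm(**inputs).logits[0, mask_pos]

# Score each class by its label word's logit at the mask position;
# LM-BFF fine-tunes exactly this objective on a handful of examples.
scores = {lbl: logits[tok.encode(w, add_special_tokens=False)[0]].item()
          for lbl, w in label_words.items()}
print(max(scores, key=scores.get))
```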

Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach

yinwenpeng/BenchmarkingZeroShot IJCNLP 2019

0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.).
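
The entailment approach proposed here is what the Hugging Face zero-shot-classification pipeline implements: each candidate label is cast into a hypothesis and scored by an NLI model against the input. A minimal sketch; the model choice and example are illustrative:

```python
from transformers import pipeline

# NLI model fine-tuned on MNLI; each candidate label is turned into a
# hypothesis like "This example is {label}." and scored against the text.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The new phone ships with a faster chip and a brighter display.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```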

Integrating Semantic Knowledge to Tackle Zero-shot Text Classification

JingqingZ/KG4ZeroShotText NAACL 2019

Insufficient, or even unavailable, training data for emerging classes is a major challenge in many classification tasks, including text classification.

Text Classification Using Label Names Only: A Language Model Self-Training Approach

yumeng5/LOTClass EMNLP 2020

In this paper, we explore the potential of only using the label name of each class to train classification models on unlabeled data, without using any labeled documents.
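
LOTClass starts by expanding each label name into a category vocabulary: occurrences of the label name in the unlabeled corpus are masked, and the masked LM's top replacements become category-indicative words. A minimal sketch of that first step; the sentence and backbone are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Mask an occurrence of the label name ("sports") in a corpus sentence and
# read off the MLM's top replacements to grow the category vocabulary.
text = f"he reads the {tok.mask_token} section of the newspaper every morning."
inputs = tok(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = mlm(**inputs).logits
top_ids = torch.topk(logits[0, mask_pos], 10).indices
print(tok.convert_ids_to_tokens(top_ids.tolist()))
```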

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning

zjunlp/promptkg 29 May 2022

Specifically, vanilla prompt learning may struggle with atypical instances, memorizing them by rote during fully supervised training, or may overfit to shallow patterns with low-shot data.
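
As a hedged illustration of the retrieval idea, not the paper's exact method: retrieve the nearest labeled neighbor of a query from a small datastore and splice it into the prompt as a demonstration, so the model can lean on retrieved knowledge rather than memorization. Encoder, datastore, and prompt format below are all assumptions for the sketch:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # illustrative encoder

encoder = SentenceTransformer("all-MiniLM-L6-v2")

train = [("the match went to penalties", "sports"),
         ("parliament passed the budget", "politics")]
query = "the striker scored twice in the final"

train_vecs = encoder.encode([t for t, _ in train], normalize_embeddings=True)
q_vec = encoder.encode(query, normalize_embeddings=True)

# Retrieve the most similar training instance and prepend it to the prompt
# as a demonstration for the retrieved-knowledge-augmented prediction.
nn = int(np.argmax(train_vecs @ q_vec))
demo_text, demo_label = train[nn]
prompt = f"{demo_text} Topic: {demo_label}. {query} Topic:"
print(prompt)
```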

Evaluating Unsupervised Text Classification: Zero-shot and Similarity-based Approaches

sebischair/lbl2vec 29 Nov 2022

Classifying text into unseen classes is a challenging natural language processing task that is mainly attempted using two different types of approaches.
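
A minimal sketch of the similarity-based family, to which Lbl2Vec belongs: embed documents and label descriptions in a shared space and assign the most similar label, with no task-specific training. The encoder choice and example are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative encoder

labels = ["sports", "politics", "technology"]
doc = "The quarterback threw for three touchdowns in the win."

# Embed the document and the label descriptions in the same space and
# pick the label with the highest cosine similarity.
doc_vec = encoder.encode(doc, convert_to_tensor=True)
label_vecs = encoder.encode(labels, convert_to_tensor=True)
sims = util.cos_sim(doc_vec, label_vecs)[0]
print(labels[int(sims.argmax())])
```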

Unsupervised Label Refinement Improves Dataless Text Classification

ZeweiChu/ULR Findings (ACL) 2021

Dataless classification relies on descriptive label names; this reliance makes dataless classifiers highly sensitive to the choice of label descriptions and hinders the broader application of dataless classification in practice.
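
A hedged sketch of the refinement idea, an illustrative analogue whose details differ from the paper: start from label-description embeddings as cluster seeds, then let k-means pull the assignments toward the structure of the unlabeled corpus, reducing dependence on the exact wording of the descriptions. Encoder and data are assumptions:

```python
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer  # illustrative encoder

encoder = SentenceTransformer("all-MiniLM-L6-v2")

labels = ["sports", "business"]
docs = ["the striker scored twice", "shares fell after the earnings call",
        "the coach praised the defense", "the merger was approved"]

doc_vecs = encoder.encode(docs)
label_vecs = encoder.encode(labels)

# Seed one centroid per label from its description embedding, then refine
# with k-means over the unlabeled documents; cluster i keeps label i.
km = KMeans(n_clusters=len(labels), init=label_vecs, n_init=1).fit(doc_vecs)
for doc, cluster in zip(docs, km.labels_):
    print(labels[cluster], "<-", doc)
```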

Issues with Entailment-based Zero-shot Text Classification

mtt1998/issues-nli ACL 2021

The general format of natural language inference (NLI) makes it tempting to use for zero-shot text classification: each target label is cast as a hypothesis sentence and the model verifies whether the input entails it, aiming at a generic classifier applicable to any specified label space.
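
One issue the paper examines is sensitivity to how a label is verbalized. Scoring entailment manually makes the hypothesis template explicit; the model and templates below are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name)

premise = "The senate debated the new tax bill all night."
# Two verbalizations of the same label can receive different scores.
for hypothesis in ["This text is about politics.",
                   "This example is politics."]:
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(-1)[0]
    # roberta-large-mnli label order: contradiction, neutral, entailment
    print(hypothesis, "->", float(probs[2]))
```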

NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction

sunyilgdx/prompts4keras 8 Sep 2021

Using prompts to steer language models toward various downstream tasks, also known as prompt-based learning or prompt learning, has lately achieved significant success compared with the pre-train-then-fine-tune paradigm.
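
NSP-BERT scores each label's prompt with BERT's next-sentence-prediction head instead of an MLM or entailment head. A minimal sketch; the prompt template and example are illustrative:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tok = BertTokenizer.from_pretrained("bert-base-uncased")
nsp = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

text = "The striker scored twice and the crowd went wild."
labels = ["sports", "politics", "technology"]

scores = {}
for label in labels:
    prompt = f"This text is about {label}."      # label verbalized as a sentence
    inputs = tok(text, prompt, return_tensors="pt")
    with torch.no_grad():
        logits = nsp(**inputs).logits
    # Index 0 of the NSP head is the "is next sentence" score; a higher
    # value means the prompt is a more plausible continuation of the text.
    scores[label] = logits.softmax(-1)[0, 0].item()

print(max(scores, key=scores.get))
```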

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

yumeng5/supergen 9 Feb 2022

Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks.
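
In that spirit, a unidirectional PLM can synthesize pseudo-labeled training data from a label-conditioned prompt, and a bidirectional classifier is then fine-tuned on the generated examples. A hedged sketch of the generation step; the model and prompt are illustrative, not the paper's exact setup:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # illustrative PLM

# Condition generation on the target label; the continuations become
# pseudo-labeled examples for fine-tuning a classifier such as BERT.
prompt = 'A positive movie review: "'
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=2,
                    do_sample=True)
for out in outputs:
    text = out["generated_text"][len(prompt):]
    print(("positive", text.split('"')[0]))
```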