Intent Recognition

18 papers with code • 1 benchmark • 2 datasets

Intent recognition is the task of mapping a user utterance to the intent it expresses (e.g., play_music, book_flight). It is a core component of task-oriented dialogue and natural language understanding (NLU) systems.

Most implemented papers

TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition

thuiar/textoir ACL 2021

The platform is composed of two main modules: open intent detection and open intent discovery.
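
A minimal sketch of this two-stage split: a classifier trained on known intents flags low-confidence inputs as "open" (detection), and the flagged utterances are then clustered (discovery). The toy embeddings, the 0.7 confidence threshold, and the fixed cluster count are illustrative assumptions, not values from TEXTOIR.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy 2-D "embeddings": three known intents plus one unseen (open) intent.
centers = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.]])
X = np.vstack([rng.normal(c, 0.5, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2, -1], 30)               # -1 marks the unseen intent

known = y >= 0                                 # only known intents are labeled
clf = LogisticRegression(max_iter=1000).fit(X[known], y[known])

# Stage 1: open intent detection -- flag low-confidence inputs as "open".
confidence = clf.predict_proba(X).max(axis=1)
is_open = confidence < 0.7                     # threshold is an assumption

# Stage 2: open intent discovery -- cluster the flagged utterances.
# The cluster count would normally be estimated; it is fixed here.
cluster_ids = KMeans(n_clusters=2, n_init=10).fit_predict(X[is_open])
print(f"{is_open.sum()} utterances flagged as open, "
      f"{len(set(cluster_ids))} discovered cluster(s)")
```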

Call Larisa Ivanovna: Code-Switching Fools Multilingual NLU Models

PragmaticsLab/CodeSwitchingAdversarial 29 Sep 2021

This is in line with the common understanding of how multilingual models transfer knowledge between languages.
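
A hedged sketch of the code-switching idea: replace some source-language tokens with translations and check whether an intent classifier's prediction flips. The tiny English-Russian lexicon and the evaluation note are illustrative stand-ins, not the paper's attack or models.

```python
import random

# Hypothetical micro-lexicon; a real attack would use a bilingual dictionary.
en_to_ru = {"call": "позвони", "play": "включи", "music": "музыку"}

def code_switch(utterance: str, rate: float = 0.5, seed: int = 0) -> str:
    """Swap each translatable token for its translation with probability `rate`."""
    rng = random.Random(seed)
    out = []
    for tok in utterance.lower().split():
        if tok in en_to_ru and rng.random() < rate:
            out.append(en_to_ru[tok])
        else:
            out.append(tok)
    return " ".join(out)

original = "call Larisa Ivanovna"
mixed = code_switch(original, rate=1.0)
print(mixed)  # -> "позвони larisa ivanovna"

# In an evaluation loop one would compare model(original) vs. model(mixed)
# and count prediction flips as attack successes.
```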

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma NA 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
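
A back-of-the-envelope sketch of the compute-optimal split. It uses the standard approximation C ≈ 6ND (training FLOPs for N parameters on D tokens) and the roughly 20-tokens-per-parameter rule of thumb commonly cited from this paper; the paper's fitted coefficients vary slightly by estimation method.

```python
import math

def compute_optimal(flops_budget: float, tokens_per_param: float = 20.0):
    """Return (params, tokens) spending `flops_budget` with D = k * N."""
    # C = 6 * N * (k * N)  =>  N = sqrt(C / (6 * k))
    n_params = math.sqrt(flops_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla's own budget was ~5.76e23 FLOPs (yielding ~70B params, ~1.4T tokens).
n, d = compute_optimal(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```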

Do We Need Online NLU Tools?

petrLorenc/benchmark-intent-tools 19 Nov 2020

In this paper, we suggest criteria to choose the best intent recognition algorithm for an application.
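
A minimal sketch of what applying such criteria could look like: a small harness that scores candidate intent classifiers on an application's own data against criteria such as accuracy and per-utterance latency. The two candidate models and the criteria here are illustrative assumptions, not the paper's tool set or evaluation protocol.

```python
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = [("play some jazz", "play_music"), ("call mom", "call"),
         ("play rock music", "play_music"), ("call the office", "call")]
test = [("play a song", "play_music"), ("call dad", "call")]
X_tr, y_tr = zip(*train)
X_te, y_te = zip(*test)

for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("naive_bayes", MultinomialNB())]:
    pipe = make_pipeline(TfidfVectorizer(), model)
    pipe.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    acc = pipe.score(X_te, y_te)
    ms = (time.perf_counter() - t0) * 1e3 / len(X_te)
    print(f"{name}: accuracy={acc:.2f}, latency={ms:.2f} ms/utterance")
```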

Continual Learning in Task-Oriented Dialogue Systems

andreamad8/ToDCL EMNLP 2021

Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities over time without incurring the high cost of retraining the whole system.
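
A minimal sketch of one common baseline in this setting, replay-based continual learning: train on a sequence of domains while mixing in a small episodic memory of past examples. The toy domains, features, and memory size are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.arange(4)                         # intents across all domains
domains = [(rng.normal(i, 0.5, size=(50, 8)), np.full(50, i))
           for i in classes]                   # one new intent per "domain"

clf = SGDClassifier(random_state=0)
memory_X, memory_y = [], []                    # small episodic replay buffer

for X, y in domains:
    # Mix current-domain data with replayed examples from earlier domains.
    if memory_X:
        X = np.vstack([X] + memory_X)
        y = np.concatenate([y] + memory_y)
    clf.partial_fit(X, y, classes=classes)
    # Retain a few exemplars from the current domain (the first rows of X).
    memory_X.append(X[:10])
    memory_y.append(y[:10])

print("accuracy over all domains:",
      clf.score(np.vstack([d[0] for d in domains]),
                np.concatenate([d[1] for d in domains])))
```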

Representation based meta-learning for few-shot spoken intent recognition

AshishMittal/RMLIntent 29 Jun 2021

Spoken intent detection has become a popular approach for interfacing easily with various smart devices.
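
A hedged sketch of prototype-based few-shot classification, the family of metric/representation approaches this line of work builds on: compute a mean embedding per class from a few labeled support examples, then assign queries to the nearest prototype. The random toy vectors stand in for learned speech encoder embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototypes(support_emb, support_labels):
    """Mean embedding per class from a few labeled support examples."""
    labels = np.unique(support_labels)
    return labels, np.stack([support_emb[support_labels == c].mean(0)
                             for c in labels])

def predict(query_emb, labels, protos):
    """Assign each query to the nearest class prototype (Euclidean)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return labels[d.argmin(axis=1)]

# 5-way 3-shot toy episode in a 16-dim embedding space.
means = rng.normal(size=(5, 16))
support = np.vstack([rng.normal(m, 0.3, size=(3, 16)) for m in means])
support_y = np.repeat(np.arange(5), 3)
query = np.vstack([rng.normal(m, 0.3, size=(4, 16)) for m in means])
query_y = np.repeat(np.arange(5), 4)

labels, protos = prototypes(support, support_y)
acc = (predict(query, labels, protos) == query_y).mean()
print(f"episode accuracy: {acc:.2f}")
```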

When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems

esteng/calibration_miso 24 May 2022

Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.
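
A toy numeric illustration of the dilution effect: a lexical cue that initially points strongly at the new symbol co-occurs with more and more other symbols as the broad training set grows, so P(new symbol | cue) shrinks. All numbers below are invented for illustration.

```python
new_symbol_with_cue = 50            # fixed pool of examples for the new symbol

for corpus_size in [1_000, 10_000, 100_000]:
    # Assume ~1% of the broader corpus also contains the cue word but
    # maps to other symbols.
    other_with_cue = 0.01 * corpus_size
    p = new_symbol_with_cue / (new_symbol_with_cue + other_with_cue)
    print(f"corpus={corpus_size:>7,}  P(new symbol | cue) = {p:.2f}")
```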

New Intent Discovery with Pre-training and Contrastive Learning

zhang-yu-wei/mtp-clnn ACL 2022

Existing approaches typically rely on a large amount of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate.
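
A hedged sketch of the contrastive-then-cluster recipe behind such methods: pull two augmented views of the same utterance together with an NT-Xent-style loss, then cluster the learned embeddings. This is a generic SimCLR-style objective written in NumPy for illustration, not the paper's exact MTP-CLNN method.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Normalized-temperature cross entropy over a batch of view pairs."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # paired view
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))                  # view 1 embeddings
z2 = z1 + rng.normal(scale=0.1, size=(8, 32))  # view 2: mild augmentation
print(f"NT-Xent loss: {nt_xent(z1, z2):.3f}")
# After training the encoder with this loss, k-means over the embeddings
# yields the discovered intent clusters.
```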