Intent Classification

94 papers with code • 5 benchmarks • 13 datasets

Intent Classification is the task of correctly labeling a natural language utterance from a predetermined set of intents.

Source: Multi-Layer Ensembling Techniques for Multilingual Intent Classification
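
As a concrete illustration of the task (not taken from any paper listed here), the sketch below trains a toy classifier that maps utterances to a fixed set of intents. The utterances, intent names, and model choice are purely illustrative.

```python
# Minimal illustrative intent classifier: TF-IDF features + logistic regression
# over a toy set of intents. Data and labels are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "what's the weather like tomorrow",
    "will it rain in the evening",
    "play some jazz music",
    "put on my workout playlist",
    "set an alarm for 7 am",
    "wake me up at six",
]
intents = ["weather", "weather", "play_music", "play_music", "set_alarm", "set_alarm"]

# Fit a simple text-classification pipeline and label a new utterance.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(utterances, intents)
print(clf.predict(["is it going to snow today"])[0])  # expected: "weather"
```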

Latest papers with no code

Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution

no code yet • 10 Dec 2023

In the quest for efficient neural network models for neural data interpretation and user intent classification in brain-computer interfaces (BCIs), learning meaningful sparse representations of the underlying neural subspaces is crucial.

Generalized zero-shot audio-to-intent classification

no code yet • 4 Nov 2023

Our multimodal training approach improves the accuracy of zero-shot intent classification on unseen intents by 2.75% and 18.2% on SLURP and an internal goal-oriented dialog dataset, respectively, compared to audio-only training.

Privacy-preserving Representation Learning for Speech Understanding

no code yet • 26 Oct 2023

In this paper, we present a novel framework to anonymize utterance-level speech embeddings generated by pre-trained encoders and show its effectiveness for a range of speech classification tasks.

IntenDD: A Unified Contrastive Learning Approach for Intent Detection and Discovery

no code yet • 25 Oct 2023

Further, intent classification may be modeled in a multiclass (MC) or multilabel (ML) setup.
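
For readers unfamiliar with the distinction, the hedged sketch below contrasts the two setups: a multiclass head assigns exactly one intent per utterance, while a multilabel head scores each intent independently. The sizes, logits, and targets are invented for illustration and do not reflect IntenDD's actual architecture.

```python
# Multiclass (MC) vs. multilabel (ML) intent heads over the same logits.
import torch
import torch.nn as nn

num_intents = 10
logits = torch.randn(4, num_intents)  # stand-in encoder outputs for a batch of 4 utterances

# Multiclass: exactly one intent per utterance -> softmax + cross-entropy.
mc_targets = torch.tensor([1, 3, 0, 7])
mc_loss = nn.CrossEntropyLoss()(logits, mc_targets)
mc_pred = logits.argmax(dim=-1)

# Multilabel: any subset of intents per utterance -> per-intent sigmoid + BCE.
ml_targets = torch.zeros(4, num_intents)
ml_targets[0, [1, 3]] = 1.0  # utterance 0 carries two intents
ml_loss = nn.BCEWithLogitsLoss()(logits, ml_targets)
ml_pred = (torch.sigmoid(logits) > 0.5).int()
```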

SNOiC: Soft Labeling and Noisy Mixup based Open Intent Classification Model

no code yet • 11 Oct 2023

SNOiC combines Soft Labeling and Noisy Mixup strategies to reduce bias and generate pseudo-data for the open intent class.
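
As a rough illustration of the noisy-mixup idea (not SNOiC's exact recipe), the sketch below synthesizes pseudo-examples for the open class by mixing pairs of known-intent embeddings and adding Gaussian noise; the function name, embedding sizes, and hyperparameters are all assumed.

```python
# Mixup-style pseudo-data for an "open" intent class, assuming that mixtures of
# known-intent embeddings plausibly fall outside any single known intent.
import numpy as np

rng = np.random.default_rng(0)
known_embeddings = rng.normal(size=(100, 64))  # stand-in utterance embeddings

def make_open_pseudo_data(embeddings, n_samples=50, alpha=0.4, noise_std=0.05):
    """Mix random pairs of known-class embeddings and add Gaussian noise."""
    i = rng.integers(0, len(embeddings), size=n_samples)
    j = rng.integers(0, len(embeddings), size=n_samples)
    lam = rng.beta(alpha, alpha, size=(n_samples, 1))
    mixed = lam * embeddings[i] + (1 - lam) * embeddings[j]
    return mixed + rng.normal(scale=noise_std, size=mixed.shape)

open_pseudo = make_open_pseudo_data(known_embeddings)  # labeled as the open class
```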

Improving End-to-End Speech Processing by Efficient Text Data Utilization with Latent Synthesis

no code yet • 9 Oct 2023

For SLU, LaSyn improves our E2E baseline by absolute 4.1% for intent classification accuracy and 3.8% for slot filling SLU-F1 on SLURP, and absolute 4.49% and 2.25% for exact match (EM) and EM-Tree accuracies on STOP respectively.

CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss

no code yet • NeurIPS 2023

This paper considers contrastive training for cross-modal 0-shot transfer wherein a pre-trained model in one modality is used for representation learning in another domain using pairwise data.
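
The sketch below illustrates the general idea of continuously weighting a contrastive objective rather than using hard 0/1 pair targets: cross-modal logits are supervised by a soft target distribution derived from similarities in the pre-trained modality's embedding space. This is a loose illustration under assumed conventions (cosine similarity, row-wise softmax), not the paper's exact loss.

```python
# Contrastive loss with continuous pair weights instead of a hard identity target.
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(text_emb, audio_emb, temperature=0.07):
    text_emb = F.normalize(text_emb, dim=-1)    # pre-trained modality
    audio_emb = F.normalize(audio_emb, dim=-1)  # new modality being trained
    logits = audio_emb @ text_emb.t() / temperature
    # Continuous weights: how similar each pair already is in the pre-trained
    # space, normalized per row into a soft target distribution.
    with torch.no_grad():
        weights = (text_emb @ text_emb.t()).clamp(min=0)
        weights = weights / weights.sum(dim=-1, keepdim=True)
    return -(weights * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

loss = weighted_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```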

In-Context Learning for Text Classification with Many Labels

no code yet • 19 Sep 2023

We analyze the performance across number of in-context examples and different model scales, showing that larger models are necessary to effectively and consistently make use of larger context lengths for ICL.
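
For context, in-context learning (ICL) for classification amounts to packing labeled demonstrations into the prompt and asking the model to label a new input. The sketch below builds such a prompt for intent classification; the template and examples are invented, and the actual completion call is left to whichever LLM client you use.

```python
# Build an in-context learning prompt for intent classification.
examples = [
    ("what's the weather like tomorrow", "weather"),
    ("play some jazz music", "play_music"),
    ("set an alarm for 7 am", "set_alarm"),
]
labels = sorted({intent for _, intent in examples})

def build_icl_prompt(query, examples, labels):
    lines = [f"Classify the utterance into one of: {', '.join(labels)}.", ""]
    for text, intent in examples:
        lines.append(f"Utterance: {text}\nIntent: {intent}\n")
    lines.append(f"Utterance: {query}\nIntent:")
    return "\n".join(lines)

prompt = build_icl_prompt("will it rain this evening", examples, labels)
print(prompt)  # send this to an LLM; larger label sets typically need more demonstrations
```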

Leveraging Large Language Models for Exploiting ASR Uncertainty

no code yet • 9 Sep 2023

While large language models excel in a variety of natural language processing (NLP) tasks, to perform well on spoken language understanding (SLU) tasks, they must either rely on off-the-shelf automatic speech recognition (ASR) systems for transcription, or be equipped with an in-built speech modality.

Enhancing Pipeline-Based Conversational Agents with Large Language Models

no code yet • 7 Sep 2023

A hybrid approach in which LLMs are integrated into pipeline-based agents saves the time and cost of building and running agents by capitalizing on the capabilities of LLMs while retaining the integration and privacy safeguards of the existing systems.