Spoken Language Understanding
118 papers with code • 5 benchmarks • 14 datasets
Latest papers
Teaching a Multilingual Large Language Model to Understand Multilingual Speech via Multi-Instructional Training
Our zero-shot evaluation results confirm the robustness of our approach across multiple tasks, including speech translation and multilingual spoken language understanding, thereby opening new avenues for applying LLMs in the speech domain.
Large Language Models for Expansion of Spoken Language Understanding Systems to New Languages
In the on-device scenario (tiny and not pretrained SLU), our method improved the Overall Accuracy from 5.31% to 22.06% over the baseline Global-Local Contrastive Learning Framework (GL-CLeF) method.
New Semantic Task for the French Spoken Language Understanding MEDIA Benchmark
A combination of multiple datasets, including the MEDIA dataset, was suggested for training this joint model.
Uni-MIS: United Multiple Intent Spoken Language Understanding via Multi-View Intent-Slot Interaction
In this work, we present a novel architecture by modeling the multi-intent SLU as a multi-view intent-slot interaction.
Do Large Language Models Understand Multi-Intent Spoken Language?
This research signifies a considerable breakthrough in leveraging Large Language Models (LLMs) for multi-intent spoken language understanding (SLU).
A BiRGAT Model for Multi-intent Spoken Language Understanding with Hierarchical Semantic Frames
Previous work on spoken language understanding (SLU) mainly focuses on single-intent settings, where each input utterance merely contains one user intent.
Pro-HAN: A Heterogeneous Graph Attention Network for Profile-Based Spoken Language Understanding
Recently, Profile-based Spoken Language Understanding (SLU) has gained increasing attention, which aims to incorporate various types of supplementary profile information (i.e., Knowledge Graph, User Profile, Context Awareness) to eliminate the prevalent ambiguities in user utterances.
Improving fairness for spoken language understanding in atypical speech with Text-to-Speech
Spoken language understanding (SLU) systems often exhibit suboptimal performance in processing atypical speech, typically caused by neurological conditions and motor impairments.
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
This paper proposes a method for investigating the impact of speech recognition errors on the performance of natural language understanding models.
Leveraging Multilingual Self-Supervised Pretrained Models for Sequence-to-Sequence End-to-End Spoken Language Understanding
A number of methods have been proposed for End-to-End Spoken Language Understanding (E2E-SLU) using pretrained models; however, their evaluation often lacks a multilingual setup and tasks that require prediction of lexical fillers, such as slot filling.