A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling

NAACL 2018  ·  Yu Wang, Yilin Shen, Hongxia Jin

Intent detection and slot filling are the two main tasks in building a spoken language understanding (SLU) system. Multiple deep-learning-based models have demonstrated good results on these tasks. The most effective algorithms are based on sequence-to-sequence (or "encoder-decoder") structures and generate the intents and semantic tags either with separate models or with a joint model. Most previous studies, however, either treat intent detection and slot filling as two separate parallel tasks, or use a single sequence-to-sequence model to generate both the semantic tags and the intent. Because these approaches model both tasks with one (joint) neural network, including encoder-decoder structures, they may not fully exploit the cross-impact between the two tasks. In this paper, new Bi-model based RNN semantic frame parsing network structures are designed to perform intent detection and slot filling jointly, capturing their cross-impact through two correlated bidirectional LSTMs (BLSTMs). Our Bi-model structure with a decoder achieves state-of-the-art results on the benchmark ATIS data, with about 0.5% intent accuracy improvement and 0.9% slot filling (F1) improvement.
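The core idea of the Bi-model, two task-specific recurrent networks that exchange hidden states at every time step, can be sketched with plain unidirectional RNN cells in numpy. This is a minimal illustration, not the paper's trained BLSTM-with-decoder architecture: all sizes, weight initializations, and the single-direction tanh cell are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 8     # hidden size of each task network (illustrative)
EMBED = 6      # toy word-embedding size (illustrative)
N_INTENTS = 3  # toy label-set sizes
N_SLOTS = 5

def rnn_step(x, h_own, h_other, Wx, Wh, Wc):
    # Bi-model cross-impact: each task's recurrent update reads its
    # own previous hidden state AND the other task's hidden state.
    return np.tanh(Wx @ x + Wh @ h_own + Wc @ h_other)

def make_weights():
    # Random toy weights; the paper learns these by training.
    return (rng.standard_normal((HIDDEN, EMBED)) * 0.1,
            rng.standard_normal((HIDDEN, HIDDEN)) * 0.1,
            rng.standard_normal((HIDDEN, HIDDEN)) * 0.1)

W_intent, W_slot = make_weights(), make_weights()
W_out_intent = rng.standard_normal((N_INTENTS, HIDDEN)) * 0.1
W_out_slot = rng.standard_normal((N_SLOTS, HIDDEN)) * 0.1

sentence = rng.standard_normal((4, EMBED))  # 4 toy word embeddings
h_i = np.zeros(HIDDEN)  # intent network state
h_s = np.zeros(HIDDEN)  # slot network state
slot_tags = []
for x in sentence:
    h_i_new = rnn_step(x, h_i, h_s, *W_intent)  # intent net reads slot state
    h_s_new = rnn_step(x, h_s, h_i, *W_slot)    # slot net reads intent state
    h_i, h_s = h_i_new, h_s_new
    slot_tags.append(int(np.argmax(W_out_slot @ h_s)))  # one tag per token

intent = int(np.argmax(W_out_intent @ h_i))  # one intent per sentence
print(intent, slot_tags)
```

Both networks are asynchronously updated from the same word embeddings, so each task's representation can benefit from what the other has inferred so far, which is the cross-impact the paper argues a single joint model misses.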


Results from the Paper


Task              Dataset  Model                  Metric    Value   Global Rank
Intent Detection  ATIS     Bi-model with decoder  Accuracy  98.99   # 1
Intent Detection  ATIS     Bi-model with decoder  F1        96.89   # 3
Slot Filling      ATIS     Bi-model with decoder  F1        0.9689  # 2

Methods


No methods listed for this paper.