Cross-lingual Approaches for Task-specific Dialogue Act Recognition

In this paper we exploit cross-lingual models to enable dialogue act (DA) recognition for specific tasks with only a small number of annotations. We design a transfer learning approach for DA recognition and validate it on two different target languages and domains. We compute dialogue turn embeddings with both a CNN and a multi-head self-attention model, and show that the best results are obtained by combining all sources of transferred information. We further demonstrate that the proposed methods significantly outperform related cross-lingual DA recognition approaches.
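To make the two turn encoders mentioned in the abstract concrete, here is a minimal PyTorch sketch of how a dialogue turn (a sequence of word embeddings, e.g. from a cross-lingual embedding space) can be mapped to a fixed-size turn embedding with either a 1-D CNN or multi-head self-attention. All names, dimensions, kernel sizes, and pooling choices here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNTurnEncoder(nn.Module):
    """Encodes a dialogue turn with 1-D convolutions over the token
    dimension, max-pooled over time into a fixed-size turn embedding."""
    def __init__(self, emb_dim=300, num_filters=100, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes
        )

    def forward(self, x):                # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)            # -> (batch, emb_dim, seq_len)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)  # (batch, num_filters * len(kernel_sizes))

class SelfAttentionTurnEncoder(nn.Module):
    """Encodes a dialogue turn with multi-head self-attention, then
    mean-pools the contextualized tokens into a turn embedding."""
    def __init__(self, emb_dim=300, num_heads=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)

    def forward(self, x):                # x: (batch, seq_len, emb_dim)
        out, _ = self.attn(x, x, x)      # every token attends to every other
        return out.mean(dim=1)           # (batch, emb_dim)

# Toy usage: a batch of 4 turns, 20 tokens each, 300-d word embeddings.
turns = torch.randn(4, 20, 300)
print(CNNTurnEncoder()(turns).shape)            # torch.Size([4, 300])
print(SelfAttentionTurnEncoder()(turns).shape)  # torch.Size([4, 300])
```

Either encoder's output can then be fed to a DA classifier; in a transfer setting, the encoder weights would be pre-trained on a resource-rich source language and fine-tuned on the small target-language annotation set.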
