Hybrid Supervised Reinforced Model for Dialogue Systems

4 Nov 2020 · Carlos Miranda, Yacine Kessaci

This paper presents a recurrent hybrid model and training procedure for task-oriented dialogue systems, based on Deep Recurrent Q-Networks (DRQN). The model handles both tasks required for Dialogue Management: State Tracking and Decision Making. It models the human-machine interaction as a latent representation that embeds the interaction context and guides the discussion. The model achieves higher performance, faster learning, and greater robustness than a non-recurrent baseline. Moreover, the results allow the policy evolution and the information content of the latent representations to be interpreted and validated.
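
The page gives no implementation details, but the abstract suggests a natural DRQN-style structure: a recurrent encoder maintains a latent dialogue state across turns (State Tracking), and a Q-value head over system actions uses that state for Decision Making. The sketch below illustrates this idea under those assumptions; the class name, feature dimensions, and action set are illustrative and not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class RecurrentDialogueQNetwork(nn.Module):
    """Illustrative DRQN-style network (not the paper's exact model):
    a GRU keeps a latent dialogue state across turns (state tracking),
    and a linear head maps it to Q-values over system actions
    (decision making)."""

    def __init__(self, obs_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, turn_features, hidden=None):
        # turn_features: (batch, turns, obs_dim) per-turn observation encoding
        latent, hidden = self.gru(turn_features, hidden)
        q_values = self.q_head(latent)  # (batch, turns, num_actions)
        return q_values, latent, hidden


if __name__ == "__main__":
    # Toy usage: 2 dialogues, 5 turns each, 32-dim turn features, 8 system actions.
    net = RecurrentDialogueQNetwork(obs_dim=32, hidden_dim=64, num_actions=8)
    obs = torch.randn(2, 5, 32)
    q, latent, _ = net(obs)
    greedy_actions = q.argmax(dim=-1)  # greedy policy per turn
    print(q.shape, latent.shape, greedy_actions.shape)
```

In a hybrid supervised/reinforced setup of the kind the title refers to, the latent state could additionally be supervised with dialogue-state labels while the Q-head is trained with temporal-difference targets; the exact training procedure is described only in the paper itself.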
