Structured Hierarchical Dialogue Policy with Graph Neural Networks

22 Sep 2020 · Zhi Chen, Xiaoyuan Liu, Lu Chen, Kai Yu

Dialogue policy training for composite tasks, such as restaurant reservation in multiple places, is a practically important and challenging problem. Recently, hierarchical deep reinforcement learning (HDRL) methods have achieved good performance on composite tasks. However, in vanilla HDRL, both the top-level and low-level policies are represented by multi-layer perceptrons (MLPs) that take the concatenation of all observations from the environment as input for predicting actions. As a result, the traditional HDRL approach often suffers from low sampling efficiency and poor transferability. In this paper, we address these problems by exploiting the flexibility of graph neural networks (GNNs). A novel ComNet is proposed to model the structure of a hierarchical agent. The performance of ComNet is tested on the composite tasks of the PyDial benchmark. Experiments show that ComNet outperforms vanilla HDRL systems, with performance close to the upper bound. It is not only sample-efficient but also more robust to noise, while maintaining transferability to other composite tasks.
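To make the architectural contrast concrete, below is a minimal, hypothetical sketch (not the authors' ComNet) of the two policy parameterizations the abstract describes: a vanilla MLP policy over the flat concatenated observation, versus a GNN-style policy that treats each belief-state slot as a graph node and shares one set of message-passing weights across slots. All module names, layer sizes, and the single-round mean-aggregation scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MLPPolicy(nn.Module):
    """Vanilla HDRL-style policy: one MLP over the flat concatenated observation."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):                      # obs: (batch, obs_dim)
        return self.net(obs)                     # action logits


class GraphPolicy(nn.Module):
    """GNN-style policy: each slot is a node; one round of mean-aggregated
    message passing shares information across slots, so the same weights can
    be reused when slots change (the basis for transfer across tasks)."""
    def __init__(self, node_dim, n_node_actions, hidden=64):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.update = nn.Linear(2 * hidden, hidden)
        self.head = nn.Linear(hidden, n_node_actions)

    def forward(self, nodes):                    # nodes: (batch, n_slots, node_dim)
        h = torch.relu(self.encode(nodes))       # per-node embeddings
        msg = torch.relu(self.message(h)).mean(dim=1, keepdim=True)  # graph summary
        msg = msg.expand_as(h)                   # broadcast summary back to each node
        h = torch.relu(self.update(torch.cat([h, msg], dim=-1)))
        return self.head(h)                      # per-node (per-slot) action logits


if __name__ == "__main__":
    # In a hierarchical setup, a top-level policy could select a sub-task while
    # low-level policies produce per-slot actions like the GraphPolicy above.
    obs = torch.randn(4, 5, 16)                  # batch of 4, 5 slots, 16 features each
    print(GraphPolicy(16, 3)(obs).shape)         # torch.Size([4, 5, 3])
    print(MLPPolicy(5 * 16, 10)(obs.flatten(1)).shape)  # torch.Size([4, 10])
```

Because the graph policy's parameters are tied across nodes rather than bound to one fixed input ordering, adding or removing slots does not require retraining from scratch, which is the intuition behind the sample-efficiency and transferability claims.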

