
End to End Dialogue Transformer

Dialogue systems aim to facilitate conversations between humans and computers, for purposes ranging from small talk to booking a vacation. We take inspiration from Sequicity, a recurrent neural network-based model that conducts a dialogue in a sequence-to-sequence fashion: it first produces a textual representation of the current dialogue state, then uses this representation, together with database results, to produce a reply to the user. We propose a dialogue system that works in the same end-to-end, sequence-to-sequence fashion but replaces Sequicity's RNN-based architecture with the Transformer.
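The two-step, text-in/text-out pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: the function names, the keyword-matching stand-in for the seq2seq model, and the toy database are all assumptions introduced here; a real system would run a Transformer (or, in Sequicity, an RNN) decoder in each generation step.

```python
# Hypothetical sketch of the two-stage decoding flow: (1) decode a textual
# belief span summarising the dialogue state, (2) query the database with it,
# (3) decode a reply conditioned on both. All names and logic here are
# illustrative stand-ins for the learned seq2seq components.

def generate_belief_span(user_utterance, history):
    """Stage 1: produce a textual representation of the dialogue state."""
    # A real system would decode this with a Transformer; we keyword-match.
    slots = []
    if "cheap" in user_utterance:
        slots.append("pricerange=cheap")
    if "north" in user_utterance:
        slots.append("area=north")
    return "; ".join(slots)

def query_database(belief_span, database):
    """Look up entities matching the constraints in the belief span."""
    constraints = dict(s.split("=") for s in belief_span.split("; ") if s)
    return [e for e in database
            if all(e.get(k) == v for k, v in constraints.items())]

def generate_response(belief_span, matches):
    """Stage 2: decode a reply conditioned on the state and DB findings."""
    if matches:
        return f"I found {len(matches)} option(s); how about {matches[0]['name']}?"
    return "Sorry, nothing matches those constraints."

def respond(user_utterance, history, database):
    """Full end-to-end turn: state tracking, DB lookup, response generation."""
    belief = generate_belief_span(user_utterance, history)
    matches = query_database(belief, database)
    return belief, generate_response(belief, matches)

if __name__ == "__main__":
    db = [
        {"name": "Golden Wok", "pricerange": "cheap", "area": "north"},
        {"name": "La Tasca", "pricerange": "expensive", "area": "centre"},
    ]
    belief, reply = respond("a cheap place in the north", [], db)
    print(belief)
    print(reply)
```

The key design point this sketch mirrors is that the intermediate dialogue state is itself plain text, so both decoding stages can share one sequence-to-sequence architecture trained end to end.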
