Search Results for author: David Vandyke

Found 19 papers, 6 papers with code

TOAD: Task-Oriented Automatic Dialogs with Diverse Response Styles

no code implementations · 15 Feb 2024 · Yinhong Liu, Yimai Fang, David Vandyke, Nigel Collier

In light of recent advances in large language models (LLMs), the expectations for the next generation of virtual assistants include enhanced naturalness and adaptability across diverse usage scenarios.

Response Generation

Plan-then-Generate: Controlled Data-to-Text Generation via Planning

2 code implementations · Findings (EMNLP) 2021 · Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, Nigel Collier

However, the limited ability of neural models to control the structure of their generated output can be restrictive in certain real-world applications.

Data-to-Text Generation · Sentence
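
To make the plan-then-generate control idea in the title concrete, here is a deliberately simplified, non-neural Python sketch: a content planner first decides which data records to mention and in what order, and a surface realiser then follows that plan exactly, so the structure of the output is controllable. The record keys, templates and helper names are illustrative assumptions, not the paper's models.

# Illustrative sketch of the two-stage "plan, then generate" pipeline; all data,
# templates and function names are assumptions, not the paper's neural models.
from typing import Dict, List

def plan(records: Dict[str, str], preferred_order: List[str]) -> List[str]:
    """Content planner: decide which record keys to verbalise and in what order."""
    return [k for k in preferred_order if k in records]

def generate(records: Dict[str, str], content_plan: List[str]) -> str:
    """Surface realiser: verbalise the records in exactly the planned order."""
    templates = {
        "name": "{v} is a restaurant",
        "area": "in the {v} of town",
        "food": "serving {v} food",
        "price": "with {v} prices",
    }
    parts = [templates[k].format(v=records[k]) for k in content_plan]
    return ", ".join(parts) + "."

data = {"name": "The Eagle", "food": "British", "price": "moderate", "area": "centre"}
content_plan = plan(data, ["name", "area", "food", "price"])
print(content_plan)                  # ['name', 'area', 'food', 'price']
print(generate(data, content_plan))  # The Eagle is a restaurant, in the centre of town, ...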

A Generative Model for Joint Natural Language Understanding and Generation

1 code implementation · ACL 2020 · Bo-Hsiang Tseng, Jianpeng Cheng, Yimai Fang, David Vandyke

This approach allows us to explore both the natural language and formal representation spaces, and facilitates information sharing through the latent space, ultimately benefiting both NLU and NLG.

Natural Language Understanding · Task-Oriented Dialogue Systems · +1
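
As a rough illustration of the shared-latent-space idea in the excerpt above, the PyTorch sketch below encodes an utterance into a single latent variable z and decodes it towards both a formal representation (NLU) and a bag of words over the utterance (NLG). The class name, layer sizes and simplified decoders are assumptions made for illustration, not the paper's architecture.

# Minimal sketch of a shared latent variable driving both NLU and NLG heads;
# sizes, names and the simplified decoders are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, SLOTS, EMB, HID, LATENT = 1000, 50, 64, 128, 32

class JointNLUNLG(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        # variational-style latent: mean and log-variance heads
        self.to_mu = nn.Linear(HID, LATENT)
        self.to_logvar = nn.Linear(HID, LATENT)
        self.nlu_head = nn.Linear(LATENT, SLOTS)   # latent -> formal representation (slot logits)
        self.nlg_head = nn.Linear(LATENT, VOCAB)   # latent -> bag of words over the utterance

    def forward(self, utterance_ids):
        _, h = self.encoder(self.embed(utterance_ids))        # h: (1, batch, HID)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.nlu_head(z), self.nlg_head(z), mu, logvar

model = JointNLUNLG()
slots, words, mu, logvar = model(torch.randint(0, VOCAB, (4, 12)))
print(slots.shape, words.shape)  # torch.Size([4, 50]) torch.Size([4, 1000])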

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

no code implementations · NAACL 2016 · Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Steve Young

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains.

Domain Adaptation · Spoken Dialogue Systems · +1

Counter-fitting Word Vectors to Linguistic Constraints

2 code implementations · NAACL 2016 · Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.

Dialogue State Tracking · Semantic Similarity · +1
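
The counter-fitting idea in the excerpt can be illustrated with a toy NumPy sketch: synonym pairs are pulled together, antonym pairs are pushed apart while they are still similar, and a small regulariser keeps every vector near its original position. The word list, constraint pairs and hyper-parameters are made up for illustration; this is not the authors' released procedure.

# Toy illustration of counter-fitting with antonymy/synonymy constraints;
# the vocabulary, pairs and hyper-parameters are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)
words = ["cheap", "inexpensive", "expensive", "pricey"]
vecs = {w: rng.normal(size=8) for w in words}
vecs = {w: v / np.linalg.norm(v) for w, v in vecs.items()}   # unit length for simplicity
orig = {w: v.copy() for w, v in vecs.items()}

synonyms = [("cheap", "inexpensive"), ("expensive", "pricey")]
antonyms = [("cheap", "expensive"), ("inexpensive", "pricey")]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

lr = 0.05
for _ in range(300):
    for a, b in synonyms:                    # synonym attraction: pull the pair together
        diff = vecs[a] - vecs[b]
        vecs[a] -= lr * diff
        vecs[b] += lr * diff
    for a, b in antonyms:                    # antonym repulsion: push apart while still similar
        if cos(vecs[a], vecs[b]) > 0.0:
            diff = vecs[a] - vecs[b]
            vecs[a] += lr * diff
            vecs[b] -= lr * diff
    for w in words:                          # vector-space preservation: stay near the original
        vecs[w] -= lr * 0.1 * (vecs[w] - orig[w])

print(round(cos(vecs["cheap"], vecs["inexpensive"]), 2))  # synonym pair: similarity pulled up
print(round(cos(vecs["cheap"], vecs["expensive"]), 2))    # antonym pair: pushed down (or already low)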

Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems

no code implementations · 13 Aug 2015 · Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, Steve Young

The models are trained on dialogues generated by a simulated user, and the best model is then used to train a policy online, which is shown to perform at least as well as a baseline system that uses prior knowledge of the user's task.

Spoken Dialogue Systems
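
Schematically, the role of such a success estimator in the reinforcement-learning loop looks roughly like the sketch below: a small network maps dialogue-level features to a success probability, which then serves as the terminal reward for the dialogue policy in place of prior knowledge of the user's task. The feature layout, network sizes and turn penalty are assumptions, not the paper's models.

# Schematic sketch of a learned success estimator used as the terminal reward;
# the feature layout, architecture and penalty are illustrative assumptions.
import torch
import torch.nn as nn

FEATURE_DIM = 16  # e.g. turn counts, confirmation counts, ASR confidence statistics

success_model = nn.Sequential(
    nn.Linear(FEATURE_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

def episode_reward(dialogue_features: torch.Tensor, turn_count: int,
                   turn_penalty: float = 0.05) -> float:
    """Terminal reward for one dialogue: predicted success minus a per-turn cost."""
    with torch.no_grad():
        p_success = success_model(dialogue_features).item()
    return p_success - turn_penalty * turn_count

# Usage: after each dialogue (simulated or real user), featurise it and reward the policy.
features = torch.randn(FEATURE_DIM)
print(round(episode_reward(features, turn_count=8), 3))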

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking

no code implementations · WS 2015 · Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on.

Sentence · Text Generation

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

2 code implementations · EMNLP 2015 · Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality.

Informativeness · Sentence · +2
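
The semantic-conditioning mechanism named in the title can be sketched roughly as follows (written from the published description, not the authors' code): a dialogue-act vector d is carried alongside the LSTM state, a reading gate gradually consumes it as words are generated, and whatever remains of d is injected into the cell state so the realisation covers the required slots. Sizes and variable names are illustrative.

# Rough sketch of a semantically conditioned LSTM cell; sizes, names and the
# exact gating layout are assumptions made for illustration.
import torch
import torch.nn as nn

class SCLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, da_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)  # i, f, o, c~
        self.read_gate = nn.Linear(input_size + hidden_size, da_size)      # r
        self.da_to_cell = nn.Linear(da_size, hidden_size, bias=False)      # inject remaining DA

    def forward(self, x, h, c, d):
        z = torch.cat([x, h], dim=-1)
        i, f, o, g = self.gates(z).chunk(4, dim=-1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        r = self.read_gate(z).sigmoid()   # how much of the dialogue act to keep this step
        d = r * d                         # consume parts of the DA as words are emitted
        c = f * c + i * g + torch.tanh(self.da_to_cell(d))
        h = o * torch.tanh(c)
        return h, c, d

cell = SCLSTMCell(input_size=32, hidden_size=64, da_size=10)
x, h, c = torch.randn(2, 32), torch.zeros(2, 64), torch.zeros(2, 64)
d = torch.ones(2, 10)                     # toy dialogue-act vector
h, c, d = cell(x, h, c, d)
print(h.shape, d.mean().item() < 1.0)     # torch.Size([2, 64]) True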
