Search Results for author: Steve Young

Found 36 papers, 5 papers with code

Unsupervised Inflection Generation Using Neural Language Modeling

no code implementations 3 Dec 2019 Octavia-Maria Sulea, Steve Young

The use of deep neural network architectures for language modeling has recently seen a tremendous increase in interest in NLP, driven by the advent of transfer learning and a shift in focus from rule-based and predictive (supervised) models to generative or unsupervised models for long-standing NLP problems such as Information Extraction and Question Answering.

Language Modelling · Question Answering · +1

Addressing Objects and Their Relations: The Conversational Entity Dialogue Model

no code implementations WS 2018 Stefan Ultes, Paweł Budzianowski, Iñigo Casanueva, Lina Rojas-Barahona, Bo-Hsiang Tseng, Yen-chen Wu, Steve Young, Milica Gašić

Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its ability to model complex dialogue structures, e.g., relations.

Spoken Dialogue Systems

Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management

no code implementations WS 2017 Pei-Hao Su, Pawel Budzianowski, Stefan Ultes, Milica Gasic, Steve Young

Firstly, to speed up the learning process, two sample-efficient neural network algorithms are presented: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER).

Dialogue Management · Management · +2
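
A note on the two algorithms named above: both combine actor-critic policy learning with an experience replay buffer. The fragment below is only a generic, minimal sketch of that combination (a replay buffer feeding plain advantage actor-critic updates on a toy problem), not the paper's TRACER or eNACER; the trust-region constraint, natural gradient, and importance weighting are omitted, and the state/action sizes and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 8, 4            # toy sizes (assumptions, not from the paper)
GAMMA, LR_ACTOR, LR_CRITIC = 0.99, 0.01, 0.05

theta = np.zeros((STATE_DIM, N_ACTIONS))   # actor: linear scores + softmax policy
w = np.zeros(STATE_DIM)                     # critic: linear state-value function
replay = []                                 # experience replay buffer

def policy(s):
    scores = s @ theta
    p = np.exp(scores - scores.max())
    return p / p.sum()

def act(s):
    return int(rng.choice(N_ACTIONS, p=policy(s)))

def store(transition, capacity=10_000):
    replay.append(transition)
    if len(replay) > capacity:
        replay.pop(0)

def update(batch_size=32):
    """One actor-critic update from replayed transitions (no importance
    weighting, trust region, or natural gradient -- those are what TRACER
    and eNACER add on top of this bare-bones scheme)."""
    global theta, w
    if len(replay) < batch_size:
        return
    for idx in rng.choice(len(replay), size=batch_size, replace=False):
        s, a, r, s_next, done = replay[idx]
        target = r + (0.0 if done else GAMMA * float(s_next @ w))
        advantage = target - float(s @ w)        # TD error as advantage estimate
        w = w + LR_CRITIC * advantage * s        # critic moves toward the TD target
        grad_log_pi = np.outer(s, -policy(s))    # d log pi(a|s) / d theta (softmax)
        grad_log_pi[:, a] += s
        theta = theta + LR_ACTOR * advantage * grad_log_pi

# Toy interaction loop with a random "environment", just to exercise the sketch.
s = rng.normal(size=STATE_DIM)
for _ in range(300):
    a = act(s)
    s_next, r, done = rng.normal(size=STATE_DIM), rng.normal(), rng.random() < 0.05
    store((s, a, r, s_next, done))
    update()
    s = rng.normal(size=STATE_DIM) if done else s_next
```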

Morph-fitting: Fine-Tuning Word Vector Spaces with Simple Language-Specific Rules

no code implementations ACL 2017 Ivan Vulić, Nikola Mrkšić, Roi Reichart, Diarmuid Ó Séaghdha, Steve Young, Anna Korhonen

Morphologically rich languages accentuate two properties of distributional vector space models: 1) the difficulty of inducing accurate representations for low-frequency word forms; and 2) insensitivity to distinct lexical relations that have similar distributional signatures.

Dialogue State Tracking · MORPH

Latent Intention Dialogue Models

1 code implementation ICML 2017 Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young

Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research.

Reinforcement Learning (RL) · +1

Learning Tone and Attribution for Financial Text Mining

no code implementations LREC 2016 Mahmoud El-Haj, Paul Rayson, Steve Young, Andrew Moore, Martin Walker, Thomas Schleicher, Vasiliki Athanasakou

Previous studies have only applied manual content analysis on a small scale to reveal such a bias in the narrative section of annual financial reports.

Attribute · BIG-bench Machine Learning

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

no code implementations NAACL 2016 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Steve Young

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains.

Domain Adaptation · Spoken Dialogue Systems · +1
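
The combinatorial point in the abstract above can be made concrete with a toy count: if each domain contributes its own dialogue acts and slots, the space of possible semantic inputs for a joint, open-domain generator multiplies across domains. The domain names, slot counts, and act counts below are made up purely for illustration and do not come from the paper.

```python
# Illustration of why open-domain NLG input spaces blow up: with made-up
# per-domain slot counts, count dialogue-act x slot-subset combinations.
domains = {"restaurant": 8, "hotel": 10, "tv": 6, "laptop": 12}  # assumed slot counts
dialogue_acts = 5                                                 # assumed acts per domain

per_domain = {d: dialogue_acts * 2 ** n_slots for d, n_slots in domains.items()}
print(per_domain)            # input combinations for each single-domain generator

combined = 1
for count in per_domain.values():
    combined *= count        # a single generator covering all domains jointly
print(f"joint input space: {combined:,} combinations")
```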

Counter-fitting Word Vectors to Linguistic Constraints

2 code implementations NAACL 2016 Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young

In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity.

Dialogue State Tracking · Semantic Similarity · +1
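
For readers curious what "injecting antonymy and synonymy constraints" looks like mechanically, here is a minimal attract/repel sketch on a toy vocabulary. It is not the authors' released counter-fitting code: the published method also includes a vector-space preservation term, and the margins, learning rate, and toy vectors below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "pre-trained" vectors (assumed); in practice these come from word2vec/GloVe.
vocab = ["cheap", "expensive", "pricey", "inexpensive"]
vectors = {word: rng.normal(size=50) for word in vocab}
for word in vectors:
    vectors[word] /= np.linalg.norm(vectors[word])

synonyms = [("cheap", "inexpensive"), ("expensive", "pricey")]   # attract
antonyms = [("cheap", "expensive"), ("cheap", "pricey")]          # repel

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def counter_fit(vectors, synonyms, antonyms, steps=200, lr=0.05,
                syn_margin=0.0, ant_margin=1.5):
    """Pull synonym pairs together and push antonym pairs apart.

    Margins are illustrative; the published method additionally preserves
    the original vector-space topology, which this sketch omits.
    """
    vecs = {word: v.copy() for word, v in vectors.items()}
    for _ in range(steps):
        for a, b in synonyms:                     # attract: shrink the distance
            diff = vecs[a] - vecs[b]
            if np.linalg.norm(diff) > syn_margin:
                vecs[a] -= lr * diff
                vecs[b] += lr * diff
        for a, b in antonyms:                     # repel: enforce a minimum distance
            diff = vecs[a] - vecs[b]
            if np.linalg.norm(diff) < ant_margin:
                vecs[a] += lr * diff
                vecs[b] -= lr * diff
    return vecs

fitted = counter_fit(vectors, synonyms, antonyms)
print("cheap/inexpensive before:", round(cosine(vectors["cheap"], vectors["inexpensive"]), 3),
      "after:", round(cosine(fitted["cheap"], fitted["inexpensive"]), 3))
print("cheap/expensive   before:", round(cosine(vectors["cheap"], vectors["expensive"]), 3),
      "after:", round(cosine(fitted["cheap"], fitted["expensive"]), 3))
```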

Learning from Real Users: Rating Dialogue Success with Neural Networks for Reinforcement Learning in Spoken Dialogue Systems

no code implementations13 Aug 2015 Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, Steve Young

The models are trained on dialogues generated by a simulated user and the best model is then used to train a policy on-line which is shown to perform at least as well as a baseline system using prior knowledge of the user's task.

Spoken Dialogue Systems

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

2 code implementations EMNLP 2015 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality.

Informativeness · Sentence · +2

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking

no code implementations WS 2015 Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young

The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on.

Sentence · Text Generation

Detecting Document Structure in a Very Large Corpus of UK Financial Reports

no code implementations LREC 2014 Mahmoud El-Haj, Paul Rayson, Steve Young, Martin Walker

In this paper we present the evaluation of our automatic methods for detecting and extracting document structure in annual financial reports.

Text Generation
