Data-to-text generation is the task of generating natural language text from structured data, such as tables, database records, or meaning representations.
(Image credit: Data-to-Text Generation with Content Selection and Planning)
In this work, we propose a template rewriting method for Natural Language Generation (NLG), where the number of templates scales only linearly with the number of slots.
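A minimal sketch of the linear-scaling idea described above (illustrative only, not the paper's method; slot names and fragments are invented): if each slot has its own template fragment and a sentence is composed from the fragments of the slots that are present, the template inventory grows linearly with the number of slots rather than with the number of slot combinations.

```python
# One template fragment per slot: the inventory scales linearly
# with the number of slots, not with slot combinations.
SLOT_TEMPLATES = {
    "name": "{value} is a restaurant",
    "food": "serving {value} food",
    "area": "in the {value} area",
    "price": "with {value} prices",
}

def realize(slots):
    """Fill one fragment per present slot and join them into a sentence."""
    parts = [SLOT_TEMPLATES[k].format(value=v)
             for k, v in slots.items() if k in SLOT_TEMPLATES]
    return " ".join(parts) + "."

print(realize({"name": "Aromi", "food": "Italian", "area": "city centre"}))
# -> Aromi is a restaurant serving Italian food in the city centre area.
```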
Ranked #2 on Data-to-Text Generation on MULTIWOZ 2.1
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems.
Ranked #4 on Data-to-Text Generation on E2E NLG Challenge
This paper describes the E2E data, a new dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.
Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records.
Semantically controlled neural response generation in limited domains has achieved strong performance.
Ranked #5 on Data-to-Text Generation on MULTIWOZ 2.1
We follow the step-by-step approach to neural data-to-text generation we proposed in Moryossef et al. (2019), in which the generation process is divided into a text-planning stage followed by a plan-realization stage.
We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization.
Ranked #7 on Data-to-Text Generation on WebNLG
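The two-stage pipeline described above can be sketched as follows (an assumed structure for illustration, not the authors' code): a symbolic planning stage decides which input facts to verbalize and in what order, so that a separate realization stage only has to render the plan as text. A rule-based realizer stands in here for the neural one.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def plan(triples: List[Triple]) -> List[Triple]:
    # Symbolic text-planning stage: fix the order of the facts
    # (here, simply sorted by subject) so the realizer is
    # responsible only for wording, keeping it faithful to the plan.
    return sorted(triples, key=lambda t: t[0])

def realize(text_plan: List[Triple]) -> str:
    # Realization stage: verbalize each planned fact in turn.
    # A trained neural realizer would replace this rule-based stand-in.
    clean = lambda x: x.replace("_", " ")
    return " ".join(f"{clean(s)} {clean(r)}: {clean(o)}."
                    for s, r, o in text_plan)

triples = [("John_Doe", "birth_place", "London"),
           ("Aromi", "food", "Italian")]
print(realize(plan(triples)))
# -> Aromi food: Italian. John Doe birth place: London.
```

Because selection and ordering are settled symbolically before realization, the realizer cannot drop or reorder facts, which is the faithfulness property the excerpt emphasizes.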
Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order.