Data-to-text Generation with Macro Planning

4 Feb 2021 · Ratish Puduppully, Mirella Lapata

Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text which is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent high level organization of important content such as entities, events and their interactions; they are learnt from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation.
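The two-stage approach can be illustrated with a toy sketch. Note this is a hypothetical, rule-based stand-in, not the paper's model: the paper *learns* a neural macro planner and an attention-based generator from data, whereas here the record format, the grouping/ordering heuristic, and the templates are all invented for illustration.

```python
# Toy sketch of the macro-plan-then-generate pipeline (hypothetical;
# the paper uses learned neural components, not these heuristics).

def macro_plan(records):
    """Stage 1: content selection and ordering.
    Groups records by entity into "paragraph plans" and orders
    entities by points scored, highest first."""
    by_entity = {}
    for rec in records:
        by_entity.setdefault(rec["entity"], []).append(rec)
    return sorted(
        by_entity.values(),
        key=lambda group: -max(r["value"] for r in group if r["type"] == "PTS"),
    )

def generate(plan):
    """Stage 2: surface realization conditioned on the macro plan.
    Each paragraph plan is verbalized with a fixed template."""
    sentences = []
    for group in plan:
        entity = group[0]["entity"]
        stats = ", ".join(f"{r['value']} {r['type']}" for r in group)
        sentences.append(f"{entity} finished with {stats}.")
    return " ".join(sentences)

records = [
    {"entity": "LeBron James", "type": "PTS", "value": 32},
    {"entity": "LeBron James", "type": "AST", "value": 9},
    {"entity": "Kevin Love", "type": "PTS", "value": 18},
]
print(generate(macro_plan(records)))
# → LeBron James finished with 32 PTS, 9 AST. Kevin Love finished with 18 PTS.
```

The key property the sketch preserves is the separation of concerns: the plan fixes *what* to say and in *which order* before any text is produced, so the generator only has to realize an already-organized input.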

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Data-to-Text Generation | MLB Dataset | Macro | BLEU | 12.62 | # 2 |
| Data-to-Text Generation | MLB Dataset (Content Ordering) | Macro | DLD | 21.8 | # 2 |
| Data-to-Text Generation | MLB Dataset (Content Ordering) | ENT | DLD | 20.7 | # 4 |
| Data-to-Text Generation | MLB Dataset (Content Selection) | Macro | Precision | 40.8 | # 3 |
| Data-to-Text Generation | MLB Dataset (Content Selection) | Macro | Recall | 54.9 | # 1 |
| Data-to-Text Generation | MLB Dataset (Relation Generation) | Macro | Precision | 94.4 | # 2 |
| Data-to-Text Generation | MLB Dataset (Relation Generation) | Macro | Count | 30.8 | # 1 |
| Data-to-Text Generation | MLB Dataset (Relation Generation) | ENT | Precision | 81.1 | # 4 |
| Data-to-Text Generation | MLB Dataset (Relation Generation) | ENT | Count | 23.8 | # 3 |
| Data-to-Text Generation | RotoWire | Macro | BLEU | 15.46 | # 5 |
| Data-to-Text Generation | RotoWire (Content Ordering) | Macro | DLD | 17.7% | # 3 |
| Data-to-Text Generation | RotoWire (Content Selection) | Macro | Precision | 34.1% | # 4 |
| Data-to-Text Generation | RotoWire (Content Selection) | Macro | Recall | 57.8% | # 1 |
| Data-to-Text Generation | RotoWire (Relation Generation) | Macro | Count | 42.1 | # 2 |
| Data-to-Text Generation | RotoWire (Relation Generation) | Macro | Precision | 97.6 | # 1 |
