Table-to-Text Generation
38 papers with code • 8 benchmarks • 6 datasets
Table-to-Text Generation is the task of generating a natural-language description from a structured table.
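To make the task concrete, here is a minimal sketch that linearizes a table into a flat string and feeds it to a generic seq2seq model. The "col : value" serialization and the t5-small checkpoint are illustrative assumptions, not a reference implementation from any of the papers below; an off-the-shelf checkpoint is untuned for this task, so its output is only a placeholder.

```python
# Minimal sketch of the task setup: linearize a structured table into a
# flat string and feed it to a generic seq2seq model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def linearize(table: dict) -> str:
    """Flatten {column: value} pairs into a single source string."""
    return " | ".join(f"{col} : {val}" for col, val in table.items())

table = {"name": "Ada Lovelace", "occupation": "mathematician", "born": "1815"}

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(linearize(table), return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```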
Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
Latest papers
PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation
Directly learning logical inference knowledge from table-text pairs is difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data.
Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models
In this paper, we propose a new extended pretraining approach, Arithmetic-Based Pretraining, that jointly addresses both shortcomings in one extended pretraining step, without requiring architectural changes or pretraining from scratch.
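As a rough illustration of what extended pretraining for numeracy can look like, the sketch below generates synthetic arithmetic cloze pairs that could be mixed into a denoising objective. The example format, operations, and sentinel tokens are assumptions, not the paper's actual pretraining objectives.

```python
import random

def make_arithmetic_examples(n: int, seed: int = 0):
    """Generate (input, target) cloze pairs over simple arithmetic.
    Purely illustrative data; the paper's actual objectives differ."""
    rng = random.Random(seed)
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    examples = []
    for _ in range(n):
        a, b = rng.randint(0, 999), rng.randint(0, 999)
        sym, fn = rng.choice(list(ops.items()))
        # Mask the result so the model must compute it from the operands.
        examples.append((f"{a} {sym} {b} = <extra_id_0>",
                         f"<extra_id_0> {fn(a, b)}"))
    return examples

for src, tgt in make_arithmetic_examples(3):
    print(src, "->", tgt)
```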
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
The proposed method prunes full self-attention into an order-invariant graph attention that captures the connected structure of cells belonging to the same row or column, and it distinguishes relevant from irrelevant cells from a structural perspective.
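The core masking idea can be sketched in a few lines: a cell attends only to cells sharing its row or column, and because the mask depends only on row and column ids, it is invariant to the order in which cells are listed. The tensor layout below is a simplification of the paper's full equivariance-aware formulation.

```python
import torch

def row_col_attention_mask(rows: torch.Tensor, cols: torch.Tensor) -> torch.Tensor:
    """Boolean mask letting each cell attend only to cells in its own row
    or column (itself included)."""
    same_row = rows.unsqueeze(0) == rows.unsqueeze(1)
    same_col = cols.unsqueeze(0) == cols.unsqueeze(1)
    return same_row | same_col

# Cells of a 2x3 table, listed in row-major order.
rows = torch.tensor([0, 0, 0, 1, 1, 1])
cols = torch.tensor([0, 1, 2, 0, 1, 2])
mask = row_col_attention_mask(rows, cols)
print(mask.int())
# Typical use inside attention:
# scores = scores.masked_fill(~mask, float("-inf"))
```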
Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
Few-shot table-to-text generation is the task of composing fluent and faithful sentences that convey table content from limited training data.
NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction.
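A toy rendering of the lookahead idea, assuming a hand-written bigram "language model" and a single keyword constraint: each candidate token's log-probability is combined with an estimate of future constraint satisfaction obtained from a shallow greedy rollout. The scoring rule and weighting are simplifications of the actual A*esque heuristics.

```python
# Toy "language model": next-token log-probabilities keyed by the
# previous token. Stands in for a real LM purely for demonstration.
LM = {
    "<s>":   {"the": -0.4, "a": -1.2, "paris": -2.5},
    "the":   {"city": -0.5, "paris": -1.5, "end": -2.0},
    "a":     {"city": -0.7, "end": -1.5},
    "city":  {"paris": -1.0, "end": -0.6},
    "paris": {"end": -0.3},
    "end":   {},
}

def greedy_rollout(prefix, depth):
    """Extend `prefix` greedily for up to `depth` steps."""
    seq = list(prefix)
    for _ in range(depth):
        nxt = LM.get(seq[-1], {})
        if not nxt:
            break
        seq.append(max(nxt, key=nxt.get))
    return seq

def lookahead_decode(constraints, max_len=5, depth=2, alpha=1.5):
    """Greedy decoding where each candidate token is scored by its
    log-probability plus alpha * (number of constraints satisfied by a
    shallow greedy rollout) -- a simplified stand-in for the A*esque
    estimate of future constraint satisfaction."""
    seq = ["<s>"]
    for _ in range(max_len):
        cands = LM.get(seq[-1], {})
        if not cands:
            break
        def score(tok):
            future = greedy_rollout(seq + [tok], depth)
            satisfied = sum(c in future for c in constraints)
            return cands[tok] + alpha * satisfied
        seq.append(max(cands, key=score))
    return seq

print(lookahead_decode({"paris"}, alpha=0))  # plain greedy: misses "paris"
print(lookahead_decode({"paris"}))           # with lookahead: includes "paris"
```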
Few-Shot Table-to-Text Generation with Prototype Memory
Neural table-to-text generation models have achieved remarkable progress on an array of tasks.
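One simplified way to picture a prototype memory is retrieval of training sentences whose source tables resemble the input table, which then serve as extra context for generation. The TF-IDF retriever below is an assumed stand-in for the paper's prototype selector, not its actual mechanism.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative memory of (linearized table, reference sentence) pairs.
train_tables = [
    "name : Alan Turing | field : computer science",
    "name : Marie Curie | field : physics",
]
train_sents = [
    "Alan Turing was a pioneer of computer science.",
    "Marie Curie was a physicist.",
]

def retrieve_prototypes(query_table: str, k: int = 1):
    """Return the k training sentences whose source tables are most
    similar to the query table under TF-IDF cosine similarity."""
    vec = TfidfVectorizer().fit(train_tables + [query_table])
    sims = cosine_similarity(vec.transform([query_table]),
                             vec.transform(train_tables))[0]
    return [train_sents[i] for i in sims.argsort()[::-1][:k]]

query = "name : Grace Hopper | field : computer science"
print(retrieve_prototypes(query))
# The retrieved prototypes would be concatenated with the table as extra
# context for the generator.
```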
Improving Encoder by Auxiliary Supervision Tasks for Table-to-Text Generation
However, such information is hard for a vanilla encoder to capture on its own.
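To illustrate the shape of auxiliary encoder supervision, the sketch below adds a hypothetical column-tagging head that predicts each input token's column id from the encoder states; the specific auxiliary task and its weighting are assumptions, not the tasks proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColumnTaggingHead(nn.Module):
    """Hypothetical auxiliary head: predict each input token's column id
    from its encoder hidden state, pushing the encoder to represent table
    structure. Illustrative only; not the paper's actual tasks."""
    def __init__(self, hidden: int, n_columns: int):
        super().__init__()
        self.proj = nn.Linear(hidden, n_columns)

    def forward(self, enc_states: torch.Tensor, col_labels: torch.Tensor) -> torch.Tensor:
        logits = self.proj(enc_states)              # (batch, seq, n_columns)
        return F.cross_entropy(logits.view(-1, logits.size(-1)),
                               col_labels.view(-1))

head = ColumnTaggingHead(hidden=16, n_columns=4)
enc_states = torch.randn(2, 5, 16)                  # mock encoder output
col_labels = torch.randint(0, 4, (2, 5))
aux_loss = head(enc_states, col_labels)
# total_loss = generation_loss + 0.5 * aux_loss     # weighted combination
print(aux_loss.item())
```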
Towards Table-to-Text Generation with Numerical Reasoning
In summary, our contributions are (1) a new dataset for numerical table-to-text generation, built from scientific papers and pairing each table with a descriptive paragraph that requires richer inference, and (2) a table-to-text generation framework enriched with numerical reasoning.
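A minimal sketch of how numerical reasoning can be injected on the input side: precompute simple numeric facts over a table column and append them to the model's source sequence. The operation set here is an illustrative assumption, not the paper's framework.

```python
def numeric_facts(column: list, name: str) -> list:
    """Derive simple numeric facts from a table column; the facts can be
    appended to the model's source sequence before generation."""
    return [
        f"max of {name} is {max(column)}",
        f"min of {name} is {min(column)}",
        f"avg of {name} is {sum(column) / len(column):.2f}",
    ]

print(numeric_facts([3.1, 4.7, 2.0], "accuracy"))
```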
How Helpful is Inverse Reinforcement Learning for Table-to-Text Generation?
Many approaches to this problem use Reinforcement Learning (RL), which maximizes a single manually defined reward, such as BLEU.
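The single-reward RL setup the paper interrogates can be sketched as REINFORCE with sentence-level BLEU as the scalar reward; the sacrebleu-based reward and the deliberately simplistic baseline handling below are illustrative assumptions, not the paper's training recipe.

```python
import sacrebleu
import torch

def reinforce_loss(sample_logprob: torch.Tensor, hypothesis: str,
                   reference: str, baseline: float = 0.0) -> torch.Tensor:
    """REINFORCE-style loss with sentence-level BLEU as the scalar reward:
    minimize -(reward - baseline) * log p(sample)."""
    reward = sacrebleu.sentence_bleu(hypothesis, [reference]).score / 100.0
    return -(reward - baseline) * sample_logprob

# `sample_logprob` would be the sum of token log-probabilities of a
# sampled output under the model; a dummy scalar is used here.
logp = torch.tensor(-12.3, requires_grad=True)
loss = reinforce_loss(logp, "paris is the capital of france",
                      "paris is the capital of france")
loss.backward()
print(loss.item(), logp.grad)
```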
Controlling Text Edition by Changing Answers of Specific Questions
Experimental results on the test set show that our proposed method is well suited to this novel NLP task.