Table-to-Text Generation
38 papers with code • 8 benchmarks • 6 datasets
Table-to-Text Generation is the task of generating a natural-language description from a structured table.
Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
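A minimal sketch of the setup: flatten the structured table into a string and let a generic pretrained seq2seq model produce the description. The linearization scheme, the prompt prefix, and the "t5-small" checkpoint below are illustrative assumptions, not the method of any particular paper on this page.

```python
# Illustrative table-to-text pipeline (assumed linearization + generic seq2seq model).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def linearize_table(table: dict) -> str:
    """Flatten {header: [cells]} columns into "<col> header | cell1 | cell2" segments."""
    parts = []
    for header, cells in table.items():
        parts.append(f"<col> {header} | " + " | ".join(str(c) for c in cells))
    return " ".join(parts)

table = {
    "Team": ["Lakers", "Celtics"],
    "Wins": [52, 48],
}

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("describe table: " + linearize_table(table), return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```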
Latest papers with no code
Medical Scientific Table-to-Text Generation with Human-in-the-Loop under the Data Sparsity Constraint
Structured (tabular) data in the preclinical and clinical domains contains valuable information about individuals, and an efficient table-to-text summarization system can drastically reduce the manual effort needed to condense this data into reports.
Diversity Enhanced Table-to-Text Generation via Type Control
Generating natural language statements to convey logical inferences from tabular data (i.e., Logical NLG) is a process with one input and a variety of valid outputs.
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
Our framework also modifies the positional encoding mechanism to preserve the relative position of tokens in the same cell but enforce position invariance among different cells.
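A hedged sketch of the positional idea described in this excerpt: count token positions within each cell (preserving relative order inside a cell) and reset the counter at cell boundaries, so cell order contributes no positional signal. This only illustrates the intuition; the paper's full structure-aware equivariance framework is more involved.

```python
# Cell-aware position ids: within-cell order is kept, positions reset across cells (assumed sketch).
from typing import List

def cell_aware_position_ids(cells: List[List[str]]) -> List[int]:
    """cells: list of cells, each a list of tokens. Returns one position id per token."""
    position_ids = []
    for cell_tokens in cells:
        # Restart the counter for every cell -> position invariance among cells.
        position_ids.extend(range(len(cell_tokens)))
    return position_ids

# Two cells of different content share the same within-cell positions.
cells = [["New", "York"], ["52", "wins"]]
print(cell_aware_position_ids(cells))  # [0, 1, 0, 1]
```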
FLAP: Table-to-Text Generation with Feature Indication and Numerical Reasoning Pretraining
In this paper, we propose an effective framework with Feature indication and numericaL reAsoning Pretraining (FLAP) to help the neural generation model on content selection and planning.
De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation
The task remains challenging: deep learning models often generate text that is linguistically fluent but logically inconsistent.
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
We introduce HTLM, a hyper-text language model trained on a large-scale web crawl.
Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation
Experimental results demonstrate that our method outperforms the previous state-of-the-art methods in both automatic and human evaluation, especially on coverage and faithfulness.
Structural Encoding and Pre-training Matter: Adapting BERT for Table-Based Fact Verification
Starting from the Table Parsing (TAPAS) model developed for question answering (Herzig et al., 2020), we find that modeling table structure improves a language model pre-trained on unstructured text.
Learning Better Representation for Tables by Self-Supervised Tasks
Secondly, the target texts in the training dataset may contain redundant information or facts that do not exist in the input tables.
Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints
Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions.
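For the knowledge-base variant of the task, the triples are typically serialized into a flat input string before generation. The delimiter tokens and example triples below are assumptions for illustration, not taken from the cited paper.

```python
# Assumed subject-predicate-object linearization for KB-to-text generation.
triples = [
    ("Alan_Turing", "birthPlace", "London"),
    ("Alan_Turing", "field", "Computer_Science"),
]

def linearize_triples(triples):
    """Serialize (subject, predicate, object) triples with explicit role markers."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

print(linearize_triples(triples))
# <S> Alan_Turing <P> birthPlace <O> London <S> Alan_Turing <P> field <O> Computer_Science
# A generator is then trained to map this string to a description such as
# "Alan Turing, born in London, worked in computer science."
```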