Table-to-Text Generation

38 papers with code • 8 benchmarks • 6 datasets

Table-to-Text Generation is the task of generating a natural-language description from a structured table.

Source: Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
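
For concreteness, here is a minimal sketch of the usual setup: the table is linearized into a flat string that a sequence-to-sequence model then maps to a description. The table contents and linearization scheme below are illustrative, not taken from the cited paper.

```python
# A minimal sketch of the task setup: linearize a structured table into a
# flat string so that a sequence-to-sequence model can generate a description.
# The table content and delimiters here are illustrative assumptions.

def linearize_table(header: list[str], rows: list[list[str]]) -> str:
    """Flatten a table into "col: value" pairs, one segment per row."""
    segments = []
    for row in rows:
        pairs = [f"{col}: {val}" for col, val in zip(header, row)]
        segments.append(" | ".join(pairs))
    return " || ".join(segments)

header = ["Player", "Points", "Team"]
rows = [["A. Smith", "31", "Hawks"], ["B. Jones", "12", "Hawks"]]
source = linearize_table(header, rows)
# -> "Player: A. Smith | Points: 31 | Team: Hawks || Player: B. Jones | ..."
# A generator would then map `source` to a description such as
# "A. Smith led the Hawks with 31 points."
```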

Latest papers with no code

Medical Scientific Table-to-Text Generation with Human-in-the-Loop under the Data Sparsity Constraint

no code yet • 24 May 2022

Structured (tabular) data in the preclinical and clinical domains contains valuable information about individuals, and an efficient table-to-text summarization system can drastically reduce the manual effort needed to condense this data into reports.

Diversity Enhanced Table-to-Text Generation via Type Control

no code yet • 22 May 2022

Generating natural language statements to convey logical inferences from tabular data (i.e., Logical NLG) is a process with one input and a variety of valid outputs.
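
A hedged sketch of what type control could look like in practice: prepend a control token naming the desired logical type to the linearized table, so the same input can be steered toward different valid statements. The type inventory and token format below are assumptions, not the paper's actual interface.

```python
# Hedged sketch of type-controlled generation: prefix the linearized table
# with a control token naming the desired logical type, so one table can
# yield several distinct statements. Type names are illustrative assumptions.

LOGIC_TYPES = ["<count>", "<superlative>", "<comparative>", "<aggregation>"]

def build_controlled_input(table_text: str, logic_type: str) -> str:
    assert logic_type in LOGIC_TYPES, f"unknown control token: {logic_type}"
    return f"{logic_type} {table_text}"

table_text = "Player: A. Smith | Points: 31 || Player: B. Jones | Points: 12"
for t in LOGIC_TYPES:
    print(build_controlled_input(table_text, t))
# Feeding each variant to the same seq2seq model steers it toward a different
# family of statements (counts, superlatives, comparisons, ...).
```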

Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning

no code yet • ACL ARR January 2022

Our framework also modifies the positional encoding mechanism to preserve the relative position of tokens within the same cell while enforcing position invariance across cells.
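
The sentence above pins down a concrete mechanism, so a small sketch may help: position ids restart at zero in every cell, which preserves token order within a cell while making the encoding invariant to how cells are arranged. Everything beyond that one sentence (tokenization, cell separation) is assumed for illustration.

```python
# Minimal sketch of the cell-level positional scheme described above: each
# token's position id is its offset *within its own cell*, so relative order
# inside a cell is preserved while every cell starts from position 0.

def cell_relative_positions(cells: list[list[str]]) -> list[int]:
    """Assign per-cell position ids: token positions restart at 0 per cell."""
    positions = []
    for cell_tokens in cells:
        positions.extend(range(len(cell_tokens)))
    return positions

cells = [["New", "York"], ["31"], ["point", "guard"]]
print(cell_relative_positions(cells))  # [0, 1, 0, 0, 1]
# Swapping two cells permutes the id blocks but leaves each block unchanged,
# which is what lets the model treat cell order equivariantly.
```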

FLAP: Table-to-Text Generation with Feature Indication and Numerical Reasoning Pretraining

no code yet • ACL ARR November 2021

In this paper, we propose an effective framework with Feature indication and numericaL reAsoning Pretraining (FLAP) to help the neural generation model on content selection and planning.
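
The abstract names two pretraining signals, feature indication and numerical reasoning, without detailing them. The sketch below shows one plausible way such pretraining examples might be constructed; it is an assumption for illustration, not FLAP's actual recipe.

```python
# Hedged sketch of how feature-indication and numerical-reasoning pretraining
# examples *might* be built. The actual FLAP objectives are not specified in
# the excerpt; this construction is an assumption.

def feature_indication_labels(fields: dict[str, str], reference: str) -> dict[str, int]:
    """Label each table field 1 if its value appears in the reference text."""
    return {k: int(v in reference) for k, v in fields.items()}

def numeric_cloze(values: list[int]) -> tuple[str, str]:
    """Build a cloze that requires simple numerical reasoning (here: max)."""
    return (f"The largest of {values} is [MASK].", str(max(values)))

fields = {"Player": "A. Smith", "Points": "31", "Team": "Hawks"}
print(feature_indication_labels(fields, "A. Smith scored 31 points."))
# {'Player': 1, 'Points': 1, 'Team': 0}
print(numeric_cloze([12, 31, 7]))
# ('The largest of [12, 31, 7] is [MASK].', '31')
```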

De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation

no code yet • ACL 2021

The task remains challenging: deep learning models often generate linguistically fluent but logically inconsistent text.

HTLM: Hyper-Text Pre-Training and Prompting of Language Models

no code yet • ICLR 2022

We introduce HTLM, a hyper-text language model trained on a large-scale web crawl.
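
HTLM's distinguishing trait is that it consumes, and can be prompted with, raw HTML, so table-to-text can be posed as filling a slot in a hyper-text template. The prompt format and mask token below are illustrative guesses, not the paper's exact scheme.

```python
# Hedged sketch of a hyper-text prompt for table-to-text: wrap the table in
# HTML and leave a masked slot for the model to fill. The exact template and
# mask token are assumptions for illustration.

def html_table_prompt(header: list[str], row: list[str], mask: str = "<mask>") -> str:
    head = "".join(f"<th>{h}</th>" for h in header)
    body = "".join(f"<td>{v}</td>" for v in row)
    return (
        "<table>"
        f"<tr>{head}</tr>"
        f"<tr>{body}</tr>"
        "</table>"
        f"<p>{mask}</p>"  # the model generates the description here
    )

print(html_table_prompt(["Player", "Points"], ["A. Smith", "31"]))
```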

Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation

no code yet • Findings (ACL) 2021

Experimental results demonstrate that our method outperforms the previous state-of-the-art methods in both automatic and human evaluation, especially on coverage and faithfulness.

Structural Encoding and Pre-training Matter: Adapting BERT for Table-Based Fact Verification

no code yet • EACL 2021

Starting from the Table Parsing (TAPAS) model developed for question answering (Herzig et al., 2020), we find that modeling table structure improves a language model pre-trained on unstructured text.
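
TAPAS-style structural encoding has a simple core that a sketch can make concrete: learned row- and column-index embeddings are summed into each token embedding, just as BERT sums position and segment embeddings. The dimensions and toy inputs below are assumptions.

```python
# Minimal sketch of structural encoding for tables: alongside ordinary token
# embeddings, add learned row- and column-index embeddings so the model sees
# which cell each token came from (as in TAPAS). Sizes are illustrative.

import torch
import torch.nn as nn

class TableEmbedding(nn.Module):
    def __init__(self, vocab=30522, dim=768, max_rows=64, max_cols=32):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.row = nn.Embedding(max_rows, dim)
        self.col = nn.Embedding(max_cols, dim)

    def forward(self, token_ids, row_ids, col_ids):
        # Structural indices are summed into the token embedding, exactly
        # like BERT's segment/position embeddings.
        return self.tok(token_ids) + self.row(row_ids) + self.col(col_ids)

emb = TableEmbedding()
token_ids = torch.tensor([[101, 2054, 2003]])  # arbitrary wordpiece ids
row_ids = torch.tensor([[0, 1, 1]])            # 0 = header row
col_ids = torch.tensor([[0, 0, 1]])
print(emb(token_ids, row_ids, col_ids).shape)  # torch.Size([1, 3, 768])
```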

Learning Better Representation for Tables by Self-Supervised Tasks

no code yet • 15 Oct 2020

Secondly, the target texts in the training dataset may contain redundant information or facts that do not exist in the input tables.

Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints

no code yet • ACL 2020

Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions.
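
A minimal sketch of that setup, with delimiter tokens that are illustrative assumptions: each (subject, relation, object) triple is serialized into a flat input string for a seq2seq generator, and a faithful model should cover each triple in its output, which is what content-matching constraints target.

```python
# Sketch of the standard triple-to-text setup: serialize knowledge triples
# into a flat input string for a seq2seq generator. The <S>/<R>/<O> delimiter
# tokens are illustrative assumptions.

def linearize_triples(triples: list[tuple[str, str, str]]) -> str:
    return " ".join(f"<S> {s} <R> {r} <O> {o}" for s, r, o in triples)

triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "born in", "London"),
]
print(linearize_triples(triples))
# "<S> Alan Turing <R> field <O> computer science <S> Alan Turing <R> born in <O> London"
# A faithful generator should mention each triple exactly once in its output.
```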