Automatic Post-Editing

25 papers with code • 0 benchmarks • 10 datasets

Automatic post-editing (APE) is used to correct errors in translations produced by machine translation systems.
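The entries below all frame APE in roughly the same way: a model receives the source sentence together with the raw MT hypothesis and outputs a corrected translation. The following is a minimal Python sketch of that data framing; the ApeTriplet class, the build_ape_example function, and the <src>/<mt> tags are illustrative assumptions, not the API of any paper or toolkit listed here.

# Minimal sketch of the standard APE data framing: each training example is a
# (source, MT hypothesis, human post-edit) triplet, and the model learns to map
# the concatenated (source, hypothesis) pair to the corrected translation.
# The tag strings and the dataclass are illustrative, not from any specific paper.

from dataclasses import dataclass

@dataclass
class ApeTriplet:
    src: str   # original source-language sentence
    mt: str    # raw machine-translation output to be corrected
    pe: str    # human (or reference) post-edited translation

def build_ape_example(triplet: ApeTriplet) -> tuple[str, str]:
    """Format one triplet as an (input, target) pair for a seq2seq model.

    A single input sequence carrying both the source and the MT hypothesis,
    separated by explicit tags, is a common way to condition the corrector
    on both signals.
    """
    model_input = f"<src> {triplet.src} <mt> {triplet.mt}"
    target = triplet.pe
    return model_input, target

if __name__ == "__main__":
    example = ApeTriplet(
        src="Das Haus ist klein.",
        mt="The house is little.",
        pe="The house is small.",
    )
    print(build_ape_example(example))

The (input, target) pairs produced this way can be fed to any standard encoder-decoder model; the papers listed below differ mainly in how they obtain, augment, or constrain such triplets.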

Automatic Correction of Human Translations

lilt/tec NAACL 2022

We show that the human errors targeted by translation error correction (TEC) span a more diverse range of error types and include far fewer translation fluency errors than the MT errors in automatic post-editing datasets, suggesting the need for dedicated TEC models that are specialized to correct human errors.

18
17 Jun 2022

Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study

osu-nlp-group/llm-cn-eval Findings (ACL) 2022

In this work, we present an extensive study on the use of pre-trained language models for the task of automatic Counter Narrative (CN) generation to fight online hate speech in English.

0
04 Apr 2022

Transfer Learning for Sequence Generation: from Single-source to Multi-source

THUNLP-MT/TRICE ACL 2021

Although directly finetuning pretrained models on multi-source sequence generation (MSG) tasks, with the multiple sources concatenated into a single long sequence, is regarded as a simple way to transfer pretrained models to MSG, we conjecture that direct finetuning leads to catastrophic forgetting and that relying solely on pretrained self-attention layers to capture cross-source information is not sufficient (a sketch of this concatenation baseline follows this entry).

11
31 May 2021
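The "single long sequence" baseline mentioned in the TRICE entry above can be sketched in a few lines: all source segments are concatenated with a separator marker and handed to an ordinary single-source seq2seq model. The <sep> marker, the whitespace-level token budget, and the function name are assumptions for illustration only.

# Sketch of the single-sequence baseline for multi-source generation (MSG):
# all source segments are joined into one long input, separated by a marker
# token, and fed to an ordinary seq2seq model.

SEP = " <sep> "

def concatenate_sources(sources: list[str], max_tokens: int = 512) -> str:
    """Join multiple source segments into a single input sequence.

    A whitespace token budget stands in for a real subword-level limit;
    truncation keeps the concatenated input within that budget.
    """
    joined = SEP.join(sources)
    tokens = joined.split()
    if len(tokens) > max_tokens:
        tokens = tokens[:max_tokens]
    return " ".join(tokens)

if __name__ == "__main__":
    # APE viewed as an MSG task: the two "sources" are the source sentence
    # and the MT hypothesis.
    print(concatenate_sources(["Das Haus ist klein.", "The house is little."]))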

Automatic Post-Editing for Vietnamese

tienthanhdhcn/VnAPE ALTA 2021

Automatic post-editing (APE) is an important remedy for reducing errors in raw translations produced by machine translation (MT) systems or software-aided translation.

11
25 Apr 2021

Adaptation of Back-translation to Automatic Post-Editing for Synthetic Data Generation

wonkeelee/ape-backtranslation EACL 2021

Automatic Post-Editing (APE) aims to correct errors in the output of a given machine translation (MT) system.

1
01 Apr 2021

Incorporating Terminology Constraints in Automatic Post-Editing

zerocstaker/constrained_ape WMT (EMNLP) 2020

In this paper, we present both autoregressive and non-autoregressive models for lexically constrained APE, demonstrating that our approach preserves 95% of the terminology constraints and also improves translation quality on English-German benchmarks (a sketch of a terminology-preservation check follows this entry).

12
19 Oct 2020
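The 95% figure quoted in the entry above is a terminology-preservation rate. A minimal sketch of such a check is shown here, assuming exact, case-insensitive word-boundary matching of required target-side terms; real evaluations may instead lemmatize or align terms, and the function name is illustrative.

# Count how many required target-side terms actually appear in the
# post-edited output, and report the preserved fraction.

import re

def term_preservation_rate(outputs: list[str], term_lists: list[list[str]]) -> float:
    """Fraction of required terms that occur in the corresponding output."""
    kept = total = 0
    for output, terms in zip(outputs, term_lists):
        for term in terms:
            total += 1
            if re.search(rf"\b{re.escape(term)}\b", output, flags=re.IGNORECASE):
                kept += 1
    return kept / total if total else 0.0

if __name__ == "__main__":
    outputs = ["Der Router startet das Betriebssystem neu."]
    term_lists = [["Router", "Betriebssystem"]]
    print(f"{term_preservation_rate(outputs, term_lists):.0%}")  # -> 100%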

MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset

sheffieldnlp/mlqe-pe LREC 2022

We present MLQE-PE, a new dataset for Machine Translation (MT) Quality Estimation (QE) and Automatic Post-Editing (APE).

37
09 Oct 2020

Can Automatic Post-Editing Improve NMT?

shamilcm/pedra EMNLP 2020

To test this hypothesis, we compile a larger corpus of human post-edits of English-to-German NMT output.

14
30 Sep 2020

DynE: Dynamic Ensemble Decoding for Multi-Document Summarization

chrishokamp/dynamic-transformer-ensembles 15 Jun 2020

Sequence-to-sequence (s2s) models are the basis for extensive work in natural language processing.

29
15 Jun 2020

Learning Non-Monotonic Automatic Post-Editing of Translations from Human Orderings

antoniogois/keystrokes_ape EAMT 2020

Recent research in neural machine translation has explored flexible generation orders, as an alternative to left-to-right generation.

2
29 Apr 2020