Search Results for author: Hongfei Xu

Found 18 papers, 2 papers with code

Self-Supervised Curriculum Learning for Spelling Error Correction

no code implementations EMNLP 2021 Zifa Gan, Hongfei Xu, Hongying Zan

Curriculum Learning (CL) utilizes training data differently during training and has shown its effectiveness in improving both performance and training efficiency in many other NLP tasks.

NMT

ParaZh-22M: A Large-Scale Chinese Parabank via Machine Translation

no code implementations COLING 2022 Wenjie Hao, Hongfei Xu, Deyi Xiong, Hongying Zan, Lingling Mu

Paraphrasing, i.e., restating the same meaning in different ways, is an important data augmentation approach for natural language processing (NLP).

Data Augmentation Machine Translation +3

Knowledge-injected Prompt Learning for Chinese Biomedical Entity Normalization

no code implementations 23 Aug 2023 Songhua Yang, Chenghao Zhang, Hongfei Xu, Yuxiang Jia

However, existing research falls short in tackling the more complex Chinese BEN task, especially in the few-shot scenario with limited medical data, and the vast potential of the external medical knowledge base has yet to be fully harnessed.

Optimizing Deep Transformers for Chinese-Thai Low-Resource Translation

no code implementations 24 Dec 2022 Wenjie Hao, Hongfei Xu, Lingling Mu, Hongying Zan

In this paper, we study the use of the deep Transformer translation model for the CCMT 2022 Chinese-Thai low-resource machine translation task.

Machine Translation Translation

NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual Question Answering

no code implementations 7 Nov 2022 Tengxun Zhang, Hongfei Xu, Josef van Genabith, Deyi Xiong, Hongying Zan

Hybrid tabular-textual question answering (QA) requires reasoning from heterogeneous information, and the types of reasoning are mainly divided into numerical reasoning and span extraction.

Question Answering

Multi-Head Highly Parallelized LSTM Decoder for Neural Machine Translation

no code implementations ACL 2021 Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Meng Zhang

The LSTM state has to be computed sequentially, n times for a sequence of length n, and the linear transformations involved in the LSTM gate and state computations are the major cost factors in this.

Machine Translation Translation
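The cost described above arises because the recurrent state must be computed once per target position. Below is a minimal PyTorch-style sketch of a standard LSTM step that marks where those linear transformations sit; the parameter names and shapes are textbook assumptions for illustration, not the paper's multi-head parallelized decoder.

```python
import torch

def lstm_step(x_t, h_prev, c_prev, W_x, W_h, b):
    """One standard LSTM step (illustrative shapes: W_x [d_in, 4*d_h], W_h [d_h, 4*d_h]).

    The two matrix multiplications below are the gate/state linear transformations;
    because h_prev depends on the previous step, this function must run
    sequentially, n times for a target sequence of length n.
    """
    gates = x_t @ W_x + h_prev @ W_h + b   # [batch, 4*d_h]
    i, f, g, o = gates.chunk(4, dim=-1)    # input, forget, cell, output gates
    c_t = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
    h_t = torch.sigmoid(o) * torch.tanh(c_t)
    return h_t, c_t
```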

Rewiring the Transformer with Depth-Wise LSTMs

no code implementations 13 Jul 2020 Hongfei Xu, Yang Song, Qiuhui Liu, Josef van Genabith, Deyi Xiong

Stacking non-linear layers allows deep neural networks to model complicated functions, and including residual connections in Transformer layers is beneficial for convergence and performance.

NMT Time Series Analysis

Learning Source Phrase Representations for Neural Machine Translation

no code implementations ACL 2020 Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu, Jingyi Zhang

Considering that modeling phrases instead of words has significantly improved Statistical Machine Translation (SMT) through the use of larger translation blocks ("phrases") and the reordering ability this enables, modeling NMT at the phrase level is an intuitive proposal to help the model capture long-distance relationships.

Machine Translation NMT +1

Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change

no code implementations ACL 2020 Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu

We propose to automatically and dynamically determine batch sizes by accumulating gradients of mini-batches and performing an optimization step at just the time when the direction of gradients starts to fluctuate.
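A minimal PyTorch-style sketch of this accumulate-until-fluctuation idea follows; measuring fluctuation via the cosine similarity between the latest mini-batch gradient and the gradient accumulated so far, and the `cos_threshold` value, are assumptions made for illustration rather than the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def train_with_dynamic_batch(model, optimizer, loss_fn, mini_batches, cos_threshold=0.0):
    """Accumulate mini-batch gradients and only step the optimizer once the
    direction of the accumulated gradient starts to fluctuate.

    The cosine-similarity test and cos_threshold are illustrative assumptions.
    """
    optimizer.zero_grad()
    acc = None  # flattened accumulated gradient before the current mini-batch
    for src, tgt in mini_batches:
        loss = loss_fn(model(src), tgt)
        loss.backward()  # gradients accumulate into .grad
        flat = torch.cat([p.grad.reshape(-1) for p in model.parameters()
                          if p.grad is not None])
        if acc is not None:
            delta = flat - acc  # contribution of the latest mini-batch alone
            if F.cosine_similarity(delta, acc, dim=0) < cos_threshold:
                # direction starts to fluctuate: take the optimization step now
                optimizer.step()
                optimizer.zero_grad()
                acc = None
                continue
        acc = flat
    if acc is not None:  # flush whatever gradient is still accumulated
        optimizer.step()
        optimizer.zero_grad()
```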

Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers

no code implementations NAACL 2021 Hongfei Xu, Josef van Genabith, Qiuhui Liu, Deyi Xiong

Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently from probing-based approaches.

Translation Word Translation

Lipschitz Constrained Parameter Initialization for Deep Transformers

no code implementations ACL 2020 Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Jingyi Zhang

In this paper, we first empirically demonstrate that a simple modification made in the official implementation, which changes the computation order of residual connection and layer normalization, can significantly ease the optimization of deep Transformers.

Translation
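The computation-order change referred to above is the contrast between applying layer normalization after the residual addition (the original post-norm Transformer) and before the sublayer (pre-norm). A minimal PyTorch sketch of the two orders, with illustrative class names:

```python
import torch.nn as nn

class PostNormSublayer(nn.Module):
    """Original order: LayerNorm is applied after the residual addition."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer, self.norm = sublayer, nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))


class PreNormSublayer(nn.Module):
    """Reordered variant: LayerNorm is applied before the sublayer, so the
    residual branch stays an identity path, which eases optimizing deep stacks."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer, self.norm = sublayer, nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.sublayer(self.norm(x))
```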

The Transference Architecture for Automatic Post-Editing

no code implementations COLING 2020 Santanu Pal, Hongfei Xu, Nico Herbig, Sudip Kumar Naskar, Antonio Krueger, Josef van Genabith

In automatic post-editing (APE) it makes sense to condition post-editing (pe) decisions on both the source (src) and the machine-translated text (mt) as input.

Automatic Post-Editing NMT
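As a generic illustration of conditioning on both src and mt (not the paper's transference architecture itself), the sketch below encodes the two inputs separately and lets the post-editing decoder cross-attend to their concatenated memories; all module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class DualSourceConditioning(nn.Module):
    """Encode src and mt separately and let the post-editing decoder cross-attend
    to both by concatenating the two encoder memories along the length dimension.
    A generic dual-source conditioning sketch, not the transference model."""
    def __init__(self, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        self.src_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.mt_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.pe_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)

    def forward(self, src_emb, mt_emb, pe_emb):
        # inputs are already-embedded tensors of shape [batch, length, d_model]
        memory = torch.cat([self.src_encoder(src_emb), self.mt_encoder(mt_emb)], dim=1)
        return self.pe_decoder(pe_emb, memory)
```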

USAAR-DFKI -- The Transference Architecture for English--German Automatic Post-Editing

no code implementations WS 2019 Santanu Pal, Hongfei Xu, Nico Herbig, Antonio Krüger, Josef van Genabith

In this paper we present an English-German Automatic Post-Editing (APE) system called transference, submitted to the APE Task organized at WMT 2019.

Automatic Post-Editing Translation

Neutron: An Implementation of the Transformer Translation Model and its Variants

2 code implementations 18 Mar 2019 Hongfei Xu, Qiuhui Liu

The Transformer translation model is easier to parallelize and provides better performance than recurrent seq2seq models, which makes it popular in both industry and the research community.

Translation
