Search Results for author: Kehai Chen

Found 38 papers, 7 papers with code

Synchronous Refinement for Neural Machine Translation

no code implementations • Findings (ACL) 2022 • Kehai Chen, Masao Utiyama, Eiichiro Sumita, Rui Wang, Min Zhang

Machine translation typically adopts an encoder-decoder framework, in which the decoder generates the target sentence word by word in an auto-regressive manner (see the sketch below).

Machine Translation • Sentence • +1
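The auto-regressive decoding mentioned above can be made concrete with a short sketch: the decoder emits one token at a time, feeding each generated token back as context for the next prediction. The `next_token_logprobs` function below is a hypothetical stand-in for a trained decoder, with a hard-coded toy distribution; any NMT model exposing this interface would fit the same loop.

```python
import math

BOS, EOS = "<bos>", "<eos>"
VOCAB = [EOS, "guten", "morgen"]

def next_token_logprobs(source, prefix):
    # Toy stand-in for a trained decoder: emit "guten", then "morgen", then stop.
    table = {0: "guten", 1: "morgen"}
    target = table.get(len(prefix) - 1, EOS)
    return {w: (0.0 if w == target else -math.inf) for w in VOCAB}

def greedy_decode(source, max_len=10):
    prefix = [BOS]
    while len(prefix) < max_len:
        logprobs = next_token_logprobs(source, prefix)
        word = max(logprobs, key=logprobs.get)  # greedy: take the arg-max token
        if word == EOS:
            break
        prefix.append(word)  # the generated word becomes context for the next step
    return prefix[1:]

print(greedy_decode(["good", "morning"]))  # -> ['guten', 'morgen']
```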

Syntax in End-to-End Natural Language Processing

no code implementations • EMNLP (ACL) 2021 • Hai Zhao, Rui Wang, Kehai Chen

This tutorial surveys the latest technical progress of syntactic parsing and the role of syntax in end-to-end natural language processing (NLP) tasks, in which semantic role labeling (SRL) and machine translation (MT) are representative NLP tasks that have long benefited from informative syntactic clues, even as end-to-end deep learning models deliver new results.

Machine Translation • NMT • +2

Unsupervised Sign Language Translation and Generation

no code implementations • 12 Feb 2024 • Zhengsheng Guo, Zhiwei He, Wenxiang Jiao, Xing Wang, Rui Wang, Kehai Chen, Zhaopeng Tu, Yong Xu, Min Zhang

Motivated by the success of unsupervised neural machine translation (UNMT), we introduce an unsupervised sign language translation and generation network (USLNet), which learns from abundant single-modality (text and video) data without parallel sign language data.

Machine Translation • Sign Language Translation • +1

When Large Language Models Meet Vector Databases: A Survey

no code implementations • 30 Jan 2024 • Zhi Jing, Yongye Su, Yikun Han, Bo Yuan, Haiyun Xu, Chunjiang Liu, Kehai Chen, Min Zhang

This survey explores the synergistic potential of Large Language Models (LLMs) and Vector Databases (VecDBs), a burgeoning and rapidly evolving research area.

Hallucination • Information Retrieval • +1

Context Consistency between Training and Testing in Simultaneous Machine Translation

1 code implementation • 13 Nov 2023 • Meizhi Zhong, Lemao Liu, Kehai Chen, Mingming Yang, Min Zhang

Simultaneous Machine Translation (SiMT) aims to yield real-time partial translations over a monotonically growing source-side context (see the sketch below).

Machine Translation • Translation
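The SiMT setting in the abstract above can be illustrated with the wait-k policy, a common SiMT baseline (shown here as background, not as this paper's method): the model reads k source tokens before writing, then alternates one write per additional read, so the visible source context only ever grows.

```python
def wait_k_schedule(source_tokens, k=2):
    """Yield (visible_source_prefix, step): read k tokens up front,
    then reveal one more source token per target token written."""
    for step in range(len(source_tokens)):
        visible = source_tokens[: min(k + step, len(source_tokens))]
        yield visible, step

source = ["ich", "sehe", "einen", "kleinen", "hund"]
for context, step in wait_k_schedule(source, k=2):
    # A real SiMT decoder would predict target token `step` from `context` only.
    print(f"write y[{step}] given source context {context}")
```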

Discriminative Reasoning for Document-level Relation Extraction

2 code implementations • Findings (ACL) 2021 • Wang Xu, Kehai Chen, Tiejun Zhao

Document-level relation extraction (DocRE) models generally use graph networks to implicitly model the reasoning skill (i.e., pattern recognition, logical reasoning, coreference reasoning, etc.)

Document-level Relation Extraction • Logical Reasoning • +1

Text Compression-aided Transformer Encoding

no code implementations • 11 Feb 2021 • Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita

In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily.

Text Compression

Prior Knowledge Representation for Self-Attention Networks

no code implementations • 1 Jan 2021 • Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita

Self-attention networks (SANs) have shown promising empirical results in various natural language processing tasks.

Translation

Document-Level Relation Extraction with Reconstruction

1 code implementation • 21 Dec 2020 • Wang Xu, Kehai Chen, Tiejun Zhao

In document-level relation extraction (DocRE), graph structure is generally used to encode relation information in the input document to classify the relation category between each entity pair, and has greatly advanced the DocRE task over the past several years.

Document-level Relation Extraction • Relation • +1

Robust Machine Reading Comprehension by Learning Soft Labels

no code implementations • COLING 2020 • Zhenyu Zhao, Shuangzhi Wu, Muyun Yang, Kehai Chen, Tiejun Zhao

Neural models have achieved great success on the task of machine reading comprehension (MRC); they are typically trained on hard labels (see the sketch below).

Machine Reading Comprehension
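The hard- versus soft-label distinction can be seen in a few lines. A hard label puts all probability mass on one answer position, while a soft label spreads credit over plausible positions; the cross-entropy below is computed against both. The positions and probabilities are invented for illustration and are not taken from the paper.

```python
import math

def cross_entropy(target_dist, predicted_dist):
    return -sum(t * math.log(p) for t, p in zip(target_dist, predicted_dist) if t > 0)

predicted = [0.05, 0.60, 0.30, 0.05]  # model's distribution over 4 answer-span starts
hard = [0.0, 1.0, 0.0, 0.0]           # one-hot: only position 1 counts as correct
soft = [0.0, 0.70, 0.30, 0.0]         # soft: position 2 also receives partial credit

print(f"hard-label loss: {cross_entropy(hard, predicted):.3f}")  # 0.511
print(f"soft-label loss: {cross_entropy(soft, predicted):.3f}")  # 0.719
```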

Content Word Aware Neural Machine Translation

no code implementations • ACL 2020 • Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita

Neural machine translation (NMT) encodes the source sentence in a universal way to generate the target sentence word by word.

Machine Translation • NMT • +2

Data-dependent Gaussian Prior Objective for Language Generation

no code implementations • ICLR 2020 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

However, MLE focuses on once-to-all matching between the predicted sequence and the gold standard, consequently treating all incorrect predictions as equally incorrect (see the numerical sketch below).

Image Captioning • L2 Regularization • +4
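The "equally incorrect" point above has a simple numerical demonstration: token-level MLE loss is -log p(gold), so how the remaining probability mass is distributed among incorrect tokens never affects the loss. The two toy predictions below assign the gold token the same probability and therefore receive identical MLE loss, even though they err in very different ways.

```python
import math

GOLD = "cat"
spread_error  = {"cat": 0.6, "dog": 0.2, "car": 0.1, "tree": 0.1}
focused_error = {"cat": 0.6, "dog": 0.0, "car": 0.4, "tree": 0.0}

for name, dist in [("spread", spread_error), ("focused", focused_error)]:
    loss = -math.log(dist[GOLD])  # MLE sees only the gold token's probability
    print(f"{name:8s} MLE loss = {loss:.4f}")  # both print 0.5108
```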

Neural Machine Translation with Universal Visual Representation

1 code implementation • ICLR 2020 • Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao

Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.

Machine Translation • NMT • +2

Self-Training for Unsupervised Neural Machine Translation in Unbalanced Training Data Scenarios

no code implementations • NAACL 2021 • Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao

Unsupervised neural machine translation (UNMT) that relies solely on massive monolingual corpora has achieved remarkable results in several translation tasks.

Machine Translation • Translation

Explicit Reordering for Neural Machine Translation

no code implementations • 8 Apr 2020 • Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita

Thus, we propose a novel reordering method to explicitly model this reordering information for the Transformer-based NMT.

Machine Translation • NMT • +2

Modeling Future Cost for Neural Machine Translation

no code implementations • 28 Feb 2020 • Chaoqun Duan, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Conghui Zhu, Tiejun Zhao

Existing neural machine translation (NMT) systems use sequence-to-sequence neural networks to generate the target translation word by word, and then encourage the word generated at each time step to be as consistent as possible with its counterpart in the reference.

Machine Translation • NMT • +1

Explicit Sentence Compression for Neural Machine Translation

1 code implementation • 27 Dec 2019 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.

Machine Translation • NMT • +3

Probing Contextualized Sentence Representations with Visual Awareness

no code implementations • 7 Nov 2019 • Zhuosheng Zhang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Hai Zhao

We present a universal framework to model contextualized sentence representations with visual awareness, motivated by the need to overcome the shortcomings of multimodal parallel data with manual annotations.

Machine Translation • Natural Language Inference • +2

English-Myanmar Supervised and Unsupervised NMT: NICT's Machine Translation Systems at WAT-2019

no code implementations • WS 2019 • Rui Wang, Haipeng Sun, Kehai Chen, Chenchen Ding, Masao Utiyama, Eiichiro Sumita

This paper presents NICT's participation (team ID: NICT) in the 6th Workshop on Asian Translation (WAT-2019) shared translation task, specifically the Myanmar (Burmese)-English task in both translation directions.

Language Modelling • Machine Translation • +2

Document-level Neural Machine Translation with Associated Memory Network

no code implementations • 31 Oct 2019 • Shu Jiang, Rui Wang, Zuchao Li, Masao Utiyama, Kehai Chen, Eiichiro Sumita, Hai Zhao, Bao-liang Lu

Most existing document-level NMT approaches settle for a smattering of global document-level information, while this work focuses on exploiting detailed document-level context through a memory network.

Machine Translation • NMT • +2

Revisiting Simple Domain Adaptation Methods in Unsupervised Neural Machine Translation

no code implementations • 26 Aug 2019 • Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao, Chenhui Chu

However, it has not been well-studied for unsupervised neural machine translation (UNMT), although UNMT has recently achieved remarkable results in several domain-specific language pairs.

Domain Adaptation • Machine Translation • +1

NICT's Supervised Neural Machine Translation Systems for the WMT19 News Translation Task

no code implementations • WS 2019 • Raj Dabre, Kehai Chen, Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, Eiichiro Sumita

In this paper, we describe our supervised neural machine translation (NMT) systems that we developed for the news translation task for Kazakh↔English, Gujarati↔English, Chinese↔English, and English→Finnish translation directions.

Machine Translation • NMT • +2

Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation

no code implementations • ACL 2019 • Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao

In previous methods, UBWE is first trained using non-parallel monolingual corpora and then this pre-trained UBWE is used to initialize the word embedding in the encoder and decoder of UNMT.

Denoising • Machine Translation • +1

Sentence-Level Agreement for Neural Machine Translation

no code implementations • ACL 2019 • Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, Tiejun Zhao

The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references.

Machine Translation • NMT • +2

Lattice-Based Transformer Encoder for Neural Machine Translation

no code implementations • ACL 2019 • Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen

To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training.

Machine Translation • NMT • +1

Syntax-Directed Attention for Neural Machine Translation

no code implementations • 12 Nov 2017 • Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao

In this paper, we extend local attention with a syntax-distance constraint, focusing on source words that are syntactically related to the predicted target word and thus learning a more effective context vector for word prediction (see the sketch below).

Machine Translation • NMT • +1
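A hedged sketch of the syntax-distance constraint described above: attention is confined to source words within a small distance of the aligned position, where distance is measured over the source dependency tree rather than the linear sentence. The toy tree, scores, and threshold below are invented for illustration; the paper's exact formulation may differ.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    return [e / sum(es) for e in es]

# Toy dependency tree over 5 source words, given as parent indices (root -> -1).
parents = [1, -1, 1, 4, 1]

def tree_distance(i, j):
    """Number of dependency edges on the path between words i and j."""
    def path_to_root(n):
        out = []
        while n != -1:
            out.append(n)
            n = parents[n]
        return out
    pi, pj = path_to_root(i), path_to_root(j)
    common = set(pi) & set(pj)
    return min(pi.index(c) + pj.index(c) for c in common)  # via lowest common ancestor

scores = [2.0, 1.5, 0.3, 0.1, 1.0]  # raw attention scores for one target word
center, max_dist = 1, 1             # aligned source position and syntax window

masked = [s if tree_distance(center, j) <= max_dist else -1e9
          for j, s in enumerate(scores)]
print([round(p, 3) for p in softmax(masked)])  # word 3 (distance 2) is masked out
```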

Context-Aware Smoothing for Neural Machine Translation

no code implementations • IJCNLP 2017 • Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao

In Neural Machine Translation (NMT), each word is represented as a low-dimensional, real-valued vector that encodes its syntactic and semantic information.

Machine Translation • NMT • +3
