Search Results for author: Jong-Hyeok Lee

Found 22 papers, 3 papers with code

Noising Scheme for Data Augmentation in Automatic Post-Editing

no code implementations • WMT (EMNLP) 2020 • WonKee Lee, Jaehun Shin, Baikjin Jung, Jihyung Lee, Jong-Hyeok Lee

In our experiment, we implemented a noising module that simulates four types of post-editing errors, and we introduced this module into a Transformer-based multi-source APE model.

Automatic Post-Editing • Data Augmentation • +1
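A minimal sketch of the noising idea described above, in Python. The snippet does not enumerate the four error types, so the deletion/insertion/substitution/local-swap operations, rates, and function names below are illustrative assumptions, not the paper's exact scheme:

```python
import random

def noise_sentence(tokens, vocab, p=0.4, rng=random):
    """Corrupt a clean post-edited sentence to simulate MT errors.

    The four operations (delete, insert, substitute, swap) stand in
    for the paper's four post-editing error types; assumptions only.
    """
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p * 0.25:
            continue                              # deletion
        elif r < p * 0.50:
            out.extend([rng.choice(vocab), tok])  # spurious insertion
        elif r < p * 0.75:
            out.append(rng.choice(vocab))         # substitution
        elif r < p and out:
            out.append(tok)
            out[-1], out[-2] = out[-2], out[-1]   # local reordering
        else:
            out.append(tok)
    return out

# The noised output plays the role of a synthetic "mt", paired with the
# clean sentence as "pe" for multi-source APE training.
pe = "the cat sat on the mat".split()
mt = noise_sentence(pe, vocab=["dog", "ran", "a"])
```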

POSTECH-ETRI’s Submission to the WMT2020 APE Shared Task: Automatic Post-Editing with Cross-lingual Language Model

no code implementations • WMT (EMNLP) 2020 • Jihyung Lee, WonKee Lee, Jaehun Shin, Baikjin Jung, Young-Kil Kim, Jong-Hyeok Lee

This paper describes POSTECH-ETRI’s submission to WMT2020 for the shared task on automatic post-editing (APE) for 2 language pairs: English-German (En-De) and English-Chinese (En-Zh).

Automatic Post-Editing • Language Modelling • +2

Quality Estimation Using Dual Encoders with Transfer Learning

no code implementations • WMT (EMNLP) 2021 • Dam Heo, WonKee Lee, Baikjin Jung, Jong-Hyeok Lee

This paper describes POSTECH’s quality estimation systems submitted to Task 2 of the WMT 2021 quality estimation shared task: Word and Sentence-Level Post-editing Effort.

Machine Translation • Sentence • +3

Tag Assisted Neural Machine Translation of Film Subtitles

1 code implementation • ACL (IWSLT) 2021 • Aren Siekmeier, WonKee Lee, HongSeok Kwon, Jong-Hyeok Lee

We implemented a neural machine translation system that uses automatic sequence tagging to improve the quality of translation.

Machine Translation • Sentence • +2

Towards Semi-Supervised Learning of Automatic Post-Editing: Data-Synthesis by Infilling Mask with Erroneous Tokens

no code implementations • 8 Apr 2022 • WonKee Lee, Seong-Hwan Heo, Baikjin Jung, Jong-Hyeok Lee

Semi-supervised learning that leverages synthetic training data has been widely adopted in the field of Automatic Post-Editing (APE) to overcome the lack of human-annotated training data.

Automatic Post-Editing • Language Modelling • +1
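A minimal sketch of the data-synthesis idea named in the title: mask tokens in a clean post-edited sentence and infill each mask with a deliberately wrong token, yielding a synthetic MT output paired with its own correction. A real implementation would sample wrong tokens from a masked language model; the dependency-free random sampler below, and all names, are assumptions:

```python
import random

def infill_with_errors(pe_tokens, vocab, mask_rate=0.2, rng=random):
    """Replace a random subset of tokens with erroneous fillers,
    producing a synthetic "mt" for a (src, mt, pe) training triple."""
    mt_tokens = list(pe_tokens)
    n_masks = max(1, int(len(mt_tokens) * mask_rate))
    for i in rng.sample(range(len(mt_tokens)), n_masks):
        wrong = [w for w in vocab if w != mt_tokens[i]]
        mt_tokens[i] = rng.choice(wrong)  # guaranteed-erroneous infill
    return mt_tokens
```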

mcBERT: Momentum Contrastive Learning with BERT for Zero-Shot Slot Filling

no code implementations • 24 Mar 2022 • Seong-Hwan Heo, WonKee Lee, Jong-Hyeok Lee

Zero-shot slot filling has received considerable attention as a way to cope with the limited data available for the target domain.

Contrastive Learning • slot-filling • +2
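Momentum contrastive learning typically keeps a slowly updated copy of the encoder whose parameters are an exponential moving average of the trained one (the MoCo-style update below). Whether mcBERT uses exactly this form and coefficient is an assumption of this sketch:

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """EMA update of the momentum (key) encoder from the query encoder;
    m close to 1 keeps the key encoder's representations stable."""
    for q_p, k_p in zip(query_encoder.parameters(),
                        key_encoder.parameters()):
        k_p.data.mul_(m).add_(q_p.data, alpha=1.0 - m)
```

Called once after each optimizer step on the query encoder, so the key encoder trails it smoothly instead of being updated by gradients.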

POSTECH Submission on Duolingo Shared Task

no code implementations • WS 2020 • Junsu Park, Hong-Seok Kwon, Jong-Hyeok Lee

In this paper, we propose a transfer-learning-based simultaneous translation model built by extending BART.

Transfer Learning • Translation

Modeling Inter-Speaker Relationship in XLNet for Contextual Spoken Language Understanding

no code implementations • 28 Oct 2019 • Jonggu Kim, Jong-Hyeok Lee

We propose two methods to capture relevant history information in a multi-turn dialogue by modeling the inter-speaker relationship for spoken language understanding (SLU).

Spoken Language Understanding

Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs

no code implementations • 15 Aug 2019 • WonKee Lee, Junsu Park, Byung-Hyun Go, Jong-Hyeok Lee

Recent approaches in Automatic Post-Editing (APE) research have shown that better results are obtained by multi-source models, which jointly encode both the source (src) and the machine translation output (mt) to produce the post-edited sentence (pe).

Automatic Post-Editing • Sentence • +2
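A minimal sketch of the multi-source idea from the abstract: encode src and mt separately, concatenate the two encoder memories, and let one decoder attend over both while generating pe. Layer sizes, the concatenation strategy, and all names are assumptions rather than the paper's context-aware architecture:

```python
import torch
import torch.nn as nn

class MultiSourceAPE(nn.Module):
    """Two Transformer encoders (src, mt) feeding one decoder (pe)."""
    def __init__(self, vocab, d=256, heads=4, layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, heads, batch_first=True), layers)
        self.src_enc, self.mt_enc = make_enc(), make_enc()
        self.dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, heads, batch_first=True), layers)
        self.out = nn.Linear(d, vocab)

    def forward(self, src, mt, pe_in):
        # Joint memory lets every decoder position attend over both inputs.
        memory = torch.cat([self.src_enc(self.emb(src)),
                            self.mt_enc(self.emb(mt))], dim=1)
        # (Causal target mask omitted for brevity.)
        return self.out(self.dec(self.emb(pe_in), memory))
```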

Decay-Function-Free Time-Aware Attention to Context and Speaker Indicator for Spoken Language Understanding

1 code implementation • NAACL 2019 • Jonggu Kim, Jong-Hyeok Lee

To capture salient contextual information for spoken language understanding (SLU) of a dialogue, we propose time-aware models that automatically learn the latent time-decay function of the history without a manual time-decay function.

dialog state tracking • Spoken Language Understanding
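"Decay-function-free" here means the influence of older dialogue turns is learned end-to-end instead of being fixed to a hand-picked form such as exp(-λ·Δt). One simple way to realize that, sketched below under that assumption, is a learned per-distance bias added to the attention logits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareAttention(nn.Module):
    """Attention over utterance history with a learned time-decay bias
    (a per-turn-distance embedding) instead of a manual decay function."""
    def __init__(self, d, max_dist=32):
        super().__init__()
        self.decay_bias = nn.Embedding(max_dist, 1)  # learned decay per gap

    def forward(self, query, history, dist):
        # query: (B, d); history: (B, T, d); dist: (B, T) turn distances
        logits = (history @ query.unsqueeze(-1)).squeeze(-1)  # (B, T)
        logits = logits + self.decay_bias(dist).squeeze(-1)   # learned decay
        weights = F.softmax(logits, dim=-1)
        return (weights.unsqueeze(-1) * history).sum(dim=1)   # context vector
```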

Multi-encoder Transformer Network for Automatic Post-Editing

no code implementations • WS 2018 • Jaehun Shin, Jong-Hyeok Lee

This paper describes POSTECH's submission to the WMT 2018 shared task on Automatic Post-Editing (APE).

Automatic Post-Editing • NMT • +1

Self-Attention-Based Message-Relevant Response Generation for Neural Conversation Model

no code implementations • 23 May 2018 • Jonggu Kim, Doyeon Kong, Jong-Hyeok Lee

Using a sequence-to-sequence framework, many neural conversation models for chit-chat succeed in generating natural responses.

Dialogue Generation • Response Generation

Multiple Range-Restricted Bidirectional Gated Recurrent Units with Attention for Relation Classification

no code implementations • 5 Jul 2017 • Jonggu Kim, Jong-Hyeok Lee

Most neural approaches to relation classification have focused on finding short patterns that represent the semantic relation using Convolutional Neural Networks (CNNs), and these approaches have generally achieved better performance than those using Recurrent Neural Networks (RNNs).

Classification • General Classification • +3
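The short-pattern intuition above is exactly what a 1-D convolution over token embeddings captures: each filter acts as a learned n-gram detector, and global max-pooling keeps the strongest match anywhere in the sentence. A minimal sketch (all sizes are assumptions):

```python
import torch
import torch.nn as nn

class CNNRelationClassifier(nn.Module):
    """Conv filters as n-gram detectors + global max-pooling."""
    def __init__(self, vocab, n_relations, d=100, n_filters=128, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.conv = nn.Conv1d(d, n_filters, kernel_size=width, padding=1)
        self.fc = nn.Linear(n_filters, n_relations)

    def forward(self, tokens):                          # tokens: (B, T)
        x = self.emb(tokens).transpose(1, 2)            # (B, d, T)
        x = torch.relu(self.conv(x)).max(dim=2).values  # strongest match
        return self.fc(x)                               # relation logits
```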

Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches

no code implementations • 8 Feb 2015 • Seung-Hoon Na, In-Su Kang, Jong-Hyeok Lee

Although these document characteristics should be handled differently, all previous term frequency normalization methods have ignored these differences and used a simplified length-driven approach, which discounts term frequency solely by document length, causing an unreasonable penalization.

Language Modelling • Retrieval
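The length-driven approach the abstract criticizes can be made concrete with standard pivoted length normalization, in which term frequency is discounted solely by document length; this particular formula is a common textbook example, not necessarily the paper's own notation:

```latex
% Pivoted length normalization: tf discounted only by document length |d|,
% relative to the average document length avgdl, with slope 0 <= b <= 1.
\mathrm{ntf}(t, d) = \frac{\mathrm{tf}(t, d)}{1 - b + b \cdot \frac{|d|}{\mathrm{avgdl}}}
```

Under such a formula, a document that is long because it covers many topics is penalized exactly like one that is merely verbose, which is the unreasonable penalization the abstract points to.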
