no code implementations • ACL (WAT) 2021 • Chanjun Park, Jaehyung Seo, Seolhwa Lee, Chanhee Lee, Hyeonseok Moon, Sugyeong Eo, Heuiseok Lim
Automatic speech recognition (ASR) is arguably the most critical component of such systems, as errors in speech recognition propagate to the downstream components and drastically degrade the user experience.
Automatic Speech Recognition (ASR) +2
no code implementations • LREC 2022 • Chanjun Park, Seolhwa Lee, Jaehyung Seo, Hyeonseok Moon, Sugyeong Eo, Heuiseok Lim
In recent years, there has been an increasing need for the restoration and translation of historical languages.
no code implementations • LREC 2022 • Hyeonseok Moon, Chanjun Park, Seolhwa Lee, Jaehyung Seo, Jungseob Lee, Sugyeong Eo, Heuiseok Lim
This study has several limitations regarding data acquisition, as there is no official dataset for most language pairs.
no code implementations • MTSummit 2021 • Sugyeong Eo, Chanjun Park, Hyeonseok Moon, Jaehyung Seo, Heuiseok Lim
In quality estimation (QE), the quality of translation can be predicted by referencing the source sentence and the machine translation (MT) output without access to the reference sentence.
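The reference-free setup described above can be sketched as a function over (source, MT output) pairs. The length-ratio heuristic below is purely illustrative and the function name is ours, not the paper's; real QE systems use a trained regressor over sentence-pair encodings.

```python
def estimate_quality(source: str, mt_output: str) -> float:
    """Toy stand-in for a learned QE model: scores a (source, MT output)
    pair without any reference translation. Here we only penalize large
    length mismatches between source and hypothesis, for illustration."""
    src_len = len(source.split())
    hyp_len = len(mt_output.split())
    if src_len == 0 or hyp_len == 0:
        return 0.0
    # Score in [0, 1]: 1.0 when lengths match, lower as they diverge.
    return min(src_len, hyp_len) / max(src_len, hyp_len)
```

The key point mirrored here is the interface, not the scoring rule: QE consumes only the source and the MT hypothesis, never a gold reference.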
no code implementations • Findings (NAACL) 2022 • Jaehyung Seo, Seounghoon Lee, Chanjun Park, Yoonna Jang, Hyeonseok Moon, Sugyeong Eo, Seonmin Koo, Heuiseok Lim
However, Korean pretrained language models still struggle to generate a short sentence under a given condition based on compositionality and commonsense reasoning (i.e., generative commonsense reasoning).
no code implementations • CCGPK (COLING) 2022 • SeungYoon Lee, Jungseob Lee, Chanjun Park, Sugyeong Eo, Hyeonseok Moon, Jaehyung Seo, Jeongbae Park, Heuiseok Lim
Our experiments show that the FoCus model cannot correctly blend knowledge according to the input dialogue, and that the dataset design is unsuitable for multi-turn conversation.
no code implementations • 25 Apr 2024 • Hyeonseok Moon, SeungYoon Lee, Seongtae Hong, Seungjun Lee, Chanjun Park, Heuiseok Lim
In our MT pipeline, all the components in a data point are concatenated to form a single translation sequence and subsequently reconstructed into their data components after translation.
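The concatenate-translate-reconstruct step can be sketched as below. The `|||` delimiter and the fallback behavior are assumptions for illustration; the paper's actual sequence marker is not specified in this excerpt.

```python
SEP = " ||| "  # hypothetical delimiter assumed to survive translation

def translate_record(components, translate_fn):
    """Concatenate all components of a data point into one sequence so a
    single MT call sees cross-field context, then split the translated
    output back into the original components."""
    joined = SEP.join(components)
    translated = translate_fn(joined)
    parts = translated.split(SEP)
    if len(parts) != len(components):
        # Delimiter was mangled by the MT system; a real pipeline would
        # fall back to translating each component separately.
        raise ValueError("delimiter not preserved in translation output")
    return parts
```

Translating the fields jointly rather than one by one is what lets the MT system use context from neighboring fields, at the cost of having to recover the field boundaries afterwards.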
no code implementations • 26 Jan 2024 • Seonmin Koo, Chanjun Park, Jinsung Kim, Jaehyung Seo, Sugyeong Eo, Hyeonseok Moon, Heuiseok Lim
To effectively address this, it is imperative to consider both the speech-level, crucial for recognition accuracy, and the text-level, critical for user-friendliness.
Automatic Speech Recognition (ASR) +1
no code implementations • 26 Jun 2023 • Seungjun Lee, Hyeonseok Moon, Chanjun Park, Heuiseok Lim
In this paper, we introduce a data-driven approach for Formality-Sensitive Machine Translation (FSMT) that caters to the unique linguistic properties of four target languages.
no code implementations • 26 Jun 2023 • Chanjun Park, Seonmin Koo, Seolhwa Lee, Jaehyung Seo, Sugyeong Eo, Hyeonseok Moon, Heuiseok Lim
The data-centric AI approach aims to enhance model performance without modifying the model itself, and has been shown to positively impact performance.
1 code implementation • 11 Jun 2023 • Sugyeong Eo, Hyeonseok Moon, Jinsung Kim, Yuna Hur, Jeongwook Kim, Songeun Lee, Changwoo Chun, Sungsoo Park, Heuiseok Lim
In this paper, we propose a QAG framework that enhances QA type diversity by producing different interrogative sentences and implicit/explicit answers.
no code implementations • 20 Mar 2023 • Chanjun Park, Hyeonseok Moon, Seolhwa Lee, Jaehyung Seo, Sugyeong Eo, Heuiseok Lim
Leaderboard systems allow researchers to objectively evaluate Natural Language Processing (NLP) models and are typically used to identify models that exhibit superior performance on a given task in a predetermined setting.
no code implementations • COLING 2022 • Sugyeong Eo, Chanjun Park, Hyeonseok Moon, Jaehyung Seo, Gyeongmin Kim, Jungseob Lee, Heuiseok Lim
As recent advances in neural machine translation have demonstrated its importance, research on quality estimation (QE) has been steadily progressing.
no code implementations • 24 Nov 2021 • Hyeonseok Moon, Chanjun Park, Sugyeong Eo, Jaehyung Seo, Seungjun Lee, Heuiseok Lim
Data building for automatic post-editing (APE) requires extensive and expert-level human effort, as it contains an elaborate process that involves identifying errors in sentences and providing suitable revisions.
no code implementations • 1 Nov 2021 • Sugyeong Eo, Chanjun Park, Jaehyung Seo, Hyeonseok Moon, Heuiseok Lim
Building data for quality estimation (QE) training is expensive and requires significant human labor.
no code implementations • 30 Oct 2021 • Jaehyung Seo, Chanjun Park, Sugyeong Eo, Hyeonseok Moon, Heuiseok Lim
Generative commonsense reasoning is the capability of a language model to generate a sentence with a given concept-set that is based on commonsense knowledge.
no code implementations • 30 Oct 2021 • Chanjun Park, Seolhwa Lee, Hyeonseok Moon, Sugyeong Eo, Jaehyung Seo, Heuiseok Lim
This paper proposes a tool for efficiently constructing high-quality parallel corpora while minimizing human labor, and makes this tool publicly available.
no code implementations • 28 Oct 2021 • Chanjun Park, Midan Shim, Sugyeong Eo, Seolhwa Lee, Jaehyung Seo, Hyeonseok Moon, Heuiseok Lim
To the best of our knowledge, this study is the first to use LIWC to analyze parallel corpora in the field of NMT.
no code implementations • NAACL 2021 • Chanjun Park, Sugyeong Eo, Hyeonseok Moon, Heuiseok Lim
We derive an optimal subword tokenization result for Korean-English machine translation by conducting a case study that combines the subword tokenization method, morphological segmentation, and vocabulary method.