Search Results for author: Jong Park

Found 9 papers, 3 papers with code

A Large-scale Comprehensive Abusiveness Detection Dataset with Multifaceted Labels from Reddit

1 code implementation CoNLL (EMNLP) 2021 Hoyun Song, Soo Hyun Ryu, Huije Lee, Jong Park

As users in online communities suffer from severe side effects of abusive language, many researchers have attempted to detect abusive texts on social media, presenting several datasets for such detection.

Abusive Language, Natural Language Understanding

Sign Language Production With Avatar Layering: A Critical Use Case over Rare Words

no code implementations LREC 2022 Jung-Ho Kim, Eui Jun Hwang, Sukmin Cho, Du Hui Lee, Jong Park

To address these problems, we introduce an avatar-based SLP system composed of a sign language translation (SLT) model and an avatar animation generation module.

Decoder, Language Modelling, +2

Query Generation with External Knowledge for Dense Retrieval

no code implementations DeeLIO (ACL) 2022 Sukmin Cho, Soyeong Jeong, Wonsuk Yang, Jong Park

The dense retriever is found to achieve a good performance improvement when used with queries that require implicit information.

Language Modelling, Retrieval

Generating Negative Samples by Manipulating Golden Responses for Unsupervised Learning of a Response Evaluation Model

1 code implementation NAACL 2021 ChaeHun Park, Eugene Jang, Wonsuk Yang, Jong Park

Reference-based metrics that rely on comparisons to a set of known correct responses often fail to account for this variety, and consequently correlate poorly with human judgment.

Dialogue Evaluation

Generating Sentential Arguments from Diverse Perspectives on Controversial Topic

1 code implementation WS 2019 ChaeHun Park, Wonsuk Yang, Jong Park

Considering diverse aspects of an argumentative issue is an essential step for mitigating a biased opinion and making reasonable decisions.

Retrieval

Computer Assisted Annotation of Tension Development in TED Talks through Crowdsourcing

no code implementations WS 2019 Seungwon Yoon, Wonsuk Yang, Jong Park

In the crowdsourced environment, we compared annotation results obtained with and without our machine-assisted annotation method.
