Search Results for author: Jushi Kai

Found 2 papers, 2 papers with code

SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully

1 code implementation · 11 Jan 2024 · Jushi Kai, Hai Hu, Zhouhan Lin

Therefore, we propose to "highlight" the factual information by selecting the tokens with the lowest probabilities and concatenating them to the original context, thus forcing the model to repeatedly read and hesitate on these tokens before generation (sketched below).

Hallucination · Text Generation
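
The abstract snippet above describes the core mechanism: pick the context tokens the model itself finds least likely and prepend them so the model "hesitates" on them before decoding. Below is a minimal sketch of that idea, assuming a Hugging Face causal LM; the model name, the number of highlighted tokens k, and the concatenation template are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any small causal LM works for the sketch; the paper's
# actual model and hyperparameters may differ.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def self_highlighted_context(context: str, k: int = 5) -> str:
    """Prefix the context with its k lowest-probability tokens."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab)
    # Log-probability the model assigned to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    # Treat the k least-likely tokens as the highlighted "key" tokens.
    lowest = (token_lp.topk(k, largest=False).indices + 1).sort().values
    highlighted = tokenizer.decode(ids[0, lowest])
    # Concatenate the highlights with the original context so the model
    # re-reads and "hesitates" on them before generating.
    return highlighted + " " + context

prompt = self_highlighted_context(
    "Vitamin C is a water-soluble nutrient found in citrus fruits."
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```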

Syntax-guided Localized Self-attention by Constituency Syntactic Distance

1 code implementation · 21 Oct 2022 · Shengyuan Hou, Jushi Kai, Haotian Xue, Bingyu Zhu, Bo Yuan, Longtao Huang, Xinbing Wang, Zhouhan Lin

Recent works have revealed that Transformers implicitly learn syntactic information in their lower layers from data, although this learning is highly dependent on the quality and scale of the training data (see the sketch below).

Machine Translation · Translation
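
The snippet above gives only the motivation; the title points at localizing self-attention with constituency syntactic distances. The following is a loose, hypothetical sketch of one way such localization could work: mask attention between positions separated by a high syntactic boundary. The distance values, the threshold, and the masking rule are invented for illustration and are not claimed to match the paper's method.

```python
import torch

def locality_mask(distances: torch.Tensor, threshold: float) -> torch.Tensor:
    """Boolean attention mask from adjacent-token syntactic distances.

    distances[i] is a hypothetical syntactic distance between tokens i and
    i+1 (larger = higher split point in the constituency tree). Positions i
    and j may attend to each other only if no boundary between them reaches
    the threshold, i.e. they lie inside the same low-level constituent.
    """
    n = distances.numel() + 1
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        for j in range(i, n):
            # Largest boundary on the span between i and j (0 if i == j).
            span_max = distances[i:j].max().item() if j > i else 0.0
            mask[i, j] = mask[j, i] = span_max < threshold
    return mask

# Toy example: two 3-token constituents separated by a high boundary (3.0).
d = torch.tensor([1.0, 1.0, 3.0, 1.0, 1.0])
print(locality_mask(d, threshold=2.0).int())  # block-diagonal pattern
```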
