1 code implementation • 24 May 2023 • Kangxi Wu, Liang Pang, Huawei Shen, Xueqi Cheng, Tat-Seng Chua
By jointly analyzing the proxy perplexities of LLMs, we can determine the source of the generated text.
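The idea of comparing proxy perplexities for source attribution can be sketched as follows. This is a minimal illustration, not the paper's method: the toy unigram "proxy models", the smoothing constant, and the helper names (`perplexity`, `attribute_source`) are all assumptions made for the example; the principle shown is only that the candidate model assigning the lowest perplexity to a text is the most likely source.

```python
import math

def perplexity(text, unigram_probs, vocab_size=1000):
    # Perplexity of `text` under a toy unigram model (a stand-in for a
    # proxy LLM); unseen tokens get a small smoothed probability.
    tokens = text.split()
    log_prob = sum(math.log(unigram_probs.get(t, 1.0 / vocab_size))
                   for t in tokens)
    return math.exp(-log_prob / len(tokens))

def attribute_source(text, proxy_models):
    # Jointly compare proxy perplexities across candidate models and
    # attribute the text to the model with the lowest perplexity.
    scores = {name: perplexity(text, probs)
              for name, probs in proxy_models.items()}
    return min(scores, key=scores.get), scores

# Two hypothetical candidate sources with different token preferences.
proxies = {
    "model_a": {"hello": 0.5, "world": 0.4},
    "model_b": {"foo": 0.5, "bar": 0.4},
}
source, scores = attribute_source("hello world", proxies)
```

Here `model_a` assigns high probability to the observed tokens, so its perplexity is lowest and the text is attributed to it.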
1 code implementation • 10 Jan 2023 • Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng
Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.
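A ranking loss of this shape can be sketched with a hinge-style penalty. This is an assumed simplification, not the paper's exact formulation: the function name, the `margin` parameter, and the hinge form are illustrative choices; the sketch only encodes the stated expectation that the full model's task-specific loss should be the minimum among the full and ablated models.

```python
def comparative_loss(full_loss, ablated_losses, margin=0.0):
    # Ranking penalty on top of the task-specific losses: each ablated
    # model whose loss is lower than the full model's (plus a margin)
    # contributes a hinge term, pushing the full model to be best.
    penalty = sum(max(0.0, full_loss - abl + margin)
                  for abl in ablated_losses)
    return full_loss + penalty

# If the full model already has the smallest loss, no penalty is added.
comparative_loss(1.0, [2.0, 3.0])   # 1.0
# If an ablated model beats the full model, a penalty is incurred.
comparative_loss(2.0, [1.0])        # 3.0
```

The hinge terms vanish exactly when the full model's loss is minimal, matching the expectation described above.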