Search Results for author: Yunchang Zhu

Found 4 papers, 4 papers with code

Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding

1 code implementation · 10 Jan 2023 · Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng

Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.

Natural Language Understanding · Network Pruning
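The comparative loss described above can be sketched as a pairwise hinge over the task-specific losses of the full model and its ablated variants. The function below is an illustrative reconstruction, not the paper's exact formulation; the `margin` parameter and hinge form are assumptions:

```python
def comparative_loss(task_losses, margin=0.0):
    """Hypothetical sketch of a cross-model comparative loss.

    task_losses[0] is the task-specific loss of the full model;
    task_losses[1:] are the losses of its ablated variants.
    A pairwise hinge penalizes cases where the full model's loss
    is not the minimum, acting as a ranking loss over the models.
    """
    full = task_losses[0]
    # hinge term: the full model should incur no more loss than any ablation
    rank_penalty = sum(max(0.0, margin + full - ablated)
                       for ablated in task_losses[1:])
    # total objective: the full model's task loss plus the ranking term
    return full + rank_penalty
```

With losses `[0.5, 0.8, 0.4]`, only the third model violates the expected ordering, adding a penalty of 0.1 on top of the full model's loss of 0.5.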

LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback

1 code implementation · 25 Apr 2022 · Yunchang Zhu, Liang Pang, Yanyan Lan, Huawei Shen, Xueqi Cheng

Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.

Retrieval
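The expectation above — that more feedback documents should never hurt a model that can separate relevant from irrelevant feedback — can be encoded as a comparative regularizer over reformulation losses. The following is a minimal sketch under that assumption, not the paper's exact objective:

```python
def lol_regularizer(reform_losses):
    """Illustrative comparative regularization over query
    reformulation losses (the exact LoL form may differ).

    reform_losses[k] is the reformulation loss when k + 1 feedback
    documents are used. Any increase in loss as documents are
    added contradicts the expectation and is penalized.
    """
    penalty = 0.0
    # compare each loss with the loss using one more feedback document
    for fewer, more in zip(reform_losses, reform_losses[1:]):
        penalty += max(0.0, more - fewer)
    return penalty
```

For losses `[1.0, 0.8, 0.9]`, the step from two to three documents raises the loss by 0.1, which is exactly the penalty.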

L2R2: Leveraging Ranking for Abductive Reasoning

1 code implementation · 22 May 2020 · Yunchang Zhu, Liang Pang, Yanyan Lan, Xueqi Cheng

To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities.

Language Modelling · Learning-To-Rank · +1
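Sorting hypotheses by plausibility, as described above, is a learning-to-rank setup. A pairwise margin loss is one standard instantiation; the sketch below is illustrative, and L2R2 may use a different ranking loss:

```python
def pairwise_rank_loss(scores, labels, margin=1.0):
    """Illustrative pairwise learning-to-rank loss over hypotheses.

    scores[i] is the model's plausibility score for hypothesis i;
    labels[i] is its graded plausibility. Each pair where i is more
    plausible than j contributes a margin hinge pushing scores[i]
    above scores[j].
    """
    loss = 0.0
    for i, yi in enumerate(labels):
        for j, yj in enumerate(labels):
            if yi > yj:  # hypothesis i should be ranked above j
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss
```

When the more plausible hypothesis already leads by at least the margin (e.g. scores `[2.0, 0.5]` with labels `[1, 0]`), the loss is zero; reversing the scores yields a positive penalty.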
