Search Results for author: Kangxi Wu

Found 2 papers, 2 papers with code

Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding

1 code implementation · 10 Jan 2023 · Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng

Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.
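The description above suggests a hinge-style pairwise ranking penalty over the task-specific losses of the full model and its progressively ablated variants. The sketch below is an illustrative interpretation, not the paper's implementation: it assumes the losses are ordered from the full model (index 0) to increasingly ablated models, and penalizes any pair where a less-ablated model incurs a higher task loss than a more-ablated one.

```python
def comparative_loss(task_losses, margin=0.0):
    """Hinge-style ranking penalty over ordered task losses.

    task_losses: task-specific losses ordered from the full model
    (index 0) to increasingly ablated models. Since the full model's
    loss is expected to be minimal, each consecutive pair where the
    less-ablated model's loss exceeds the more-ablated model's loss
    contributes a hinge penalty (optionally with a margin).
    """
    penalty = 0.0
    for i in range(len(task_losses) - 1):
        # Violation if loss does not increase with ablation.
        penalty += max(0.0, task_losses[i] - task_losses[i + 1] + margin)
    return penalty
```

With a correctly ordered sequence such as `[0.2, 0.3, 0.5]` the penalty is zero; an inversion such as `[0.5, 0.3]` contributes the size of the violation.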

Natural Language Understanding · Network Pruning
