1 code implementation • 15 Apr 2024 • Yangyifan Xu, Jinliang Lu, Jiajun Zhang
Ensembling different large language models (LLMs) to unleash their complementary potential and harness their individual strengths is highly valuable.
1 code implementation • ACL 2021 • Yangyifan Xu, Yijin Liu, Fandong Meng, Jiajun Zhang, Jinan Xu, Jie Zhou
Recently, token-level adaptive training has achieved promising improvements in machine translation: the cross-entropy loss is adjusted by assigning different training weights to different tokens, in order to alleviate the token imbalance problem.
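The core idea described above, scaling each token's cross-entropy term by a per-token training weight, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the NumPy-based setup are assumptions, and how the weights themselves are chosen is left to the training scheme.

```python
import numpy as np

def weighted_token_ce(logits, targets, token_weights):
    """Token-level weighted cross-entropy (illustrative sketch).

    logits:        (seq_len, vocab) unnormalized scores
    targets:       (seq_len,) gold token ids
    token_weights: (seq_len,) per-token training weights
    """
    # numerically stable log-softmax over the vocabulary axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # negative log-likelihood of each gold token
    nll = -log_probs[np.arange(len(targets)), targets]
    # scale each token's loss by its weight, then average
    return float((token_weights * nll).mean())
```

With uniform weights this reduces to the standard cross-entropy; raising the weight of rare or hard tokens makes their gradient contribution larger, which is the mechanism the abstract refers to.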