1 code implementation • 8 Nov 2023 • Chenmien Tan, Ge Zhang, Jie Fu
While large language models (LLMs) acquire knowledge from their pre-training corpora, that knowledge may be factually incorrect or become outdated over time, which necessitates rectifying the language model's (LM's) knowledge after training.
no code implementations • 16 Mar 2023 • Junqi Qian, Paul Weng, Chenmien Tan
LR4GPM alternates between two phases: (1) learning a (possibly vector-valued) reward function that fits the performance metric, and (2) training a policy to optimize an approximation of this performance metric based on the learned rewards.
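The two-phase alternation can be sketched in a toy form. The snippet below is a minimal, illustrative sketch only, not the authors' implementation: it assumes a one-step environment with three discrete actions, fits a per-action reward by averaging observed metric values (standing in for regression on the performance metric), and then acts greedily with respect to the learned reward (standing in for policy optimization). All names (`performance_metric`, `fit_reward`, `improve_policy`) are hypothetical.

```python
# Hypothetical toy sketch of the LR4GPM-style alternation; not the
# paper's actual algorithm or code.

ACTIONS = [0, 1, 2]

def performance_metric(action):
    # Ground-truth performance metric the agent cannot optimize
    # directly; here a toy preference for action 2 (illustrative).
    return {0: 0.1, 1: 0.5, 2: 1.0}[action]

def fit_reward(samples):
    # Phase (1): "learn" a reward per action by averaging observed
    # metric values -- a stand-in for fitting a reward function.
    reward = {}
    for a in ACTIONS:
        vals = [m for (act, m) in samples if act == a]
        reward[a] = sum(vals) / len(vals) if vals else 0.0
    return reward

def improve_policy(reward):
    # Phase (2): greedy policy w.r.t. the learned reward -- a stand-in
    # for training a policy on the approximated metric.
    return max(ACTIONS, key=lambda a: reward.get(a, 0.0))

def lr4gpm_sketch(iterations=2):
    samples = []
    policy_action = 0
    for _ in range(iterations):
        # Collect experience: current policy action plus uniform
        # exploration over all actions.
        for a in ACTIONS + [policy_action]:
            samples.append((a, performance_metric(a)))
        reward = fit_reward(samples)          # phase (1)
        policy_action = improve_policy(reward)  # phase (2)
    return policy_action

print(lr4gpm_sketch())  # converges to the metric-maximizing action
```

In this toy setting the loop converges to action 2, the action with the highest metric value, after a single round of fitting; the real method applies the same alternation with function approximators and a full RL inner loop.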