Search Results for author: Lau Jia Jaw

Found 1 paper, 0 papers with code

Fine-tuning Language Models with Generative Adversarial Reward Modelling

no code implementations · 9 May 2023 · Zhang Ze Yu, Lau Jia Jaw, Zhang Hui, Bryan Kian Hsiang Low

Reinforcement Learning with Human Feedback (RLHF) has been demonstrated to significantly enhance the performance of large language models (LLMs) by aligning their outputs with desired human values through instruction tuning.
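RLHF as described in the abstract typically starts by fitting a reward model on human preference pairs before any reinforcement learning step. As a rough illustration of that idea (not code from the paper, which proposes a generative adversarial alternative to this standard setup), here is a minimal sketch of the common Bradley-Terry pairwise loss; `pairwise_preference_loss` is a hypothetical helper name:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss often used to train RLHF reward models:
    -log sigmoid(r_chosen - r_rejected). The loss is small when the
    reward model scores the human-preferred response above the
    rejected one, and large otherwise."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Toy reward scores (illustrative only): the model correctly prefers
# the chosen response in the first case, incorrectly in the second.
loss_correct = pairwise_preference_loss(2.0, -1.0)  # near zero
loss_wrong = pairwise_preference_loss(-1.0, 2.0)    # much larger
```

During reward-model training this loss is minimized over a dataset of (chosen, rejected) response pairs; the trained reward model then provides the scalar signal that the RL step optimizes.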

reinforcement-learning
