Search Results for author: Rauno Arike

Found 1 paper, 0 papers with code

Beyond Training Objectives: Interpreting Reward Model Divergence in Large Language Models

no code implementations · 12 Oct 2023 · Luke Marks, Amir Abdullah, Clement Neo, Rauno Arike, Philip Torr, Fazl Barez

Large language models (LLMs) fine-tuned by reinforcement learning from human feedback (RLHF) are becoming more widely deployed.
