Search Results for author: Adam X. Yang

Found 5 papers, 1 paper with code

Bayesian Reward Models for LLM Alignment

no code implementations • 20 Feb 2024 • Adam X. Yang, Maxime Robeyns, Thomas Coste, Jun Wang, Haitham Bou-Ammar, Laurence Aitchison

To ensure that large language model (LLM) responses are helpful and non-toxic, we usually fine-tune a reward model on human preference data.

Language Modelling • Large Language Model
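
The abstract refers to the standard preference-based reward-modelling setup. As a rough illustration (not the paper's code), a minimal Bradley-Terry preference loss in PyTorch might look like the sketch below; `reward_model` and the tensor names are hypothetical placeholders.

```python
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    """Bradley-Terry loss for reward-model fine-tuning (illustrative sketch).

    `reward_model` maps token ids to one scalar reward per sequence;
    `chosen_ids` / `rejected_ids` are the preferred and dispreferred
    responses from a human preference pair.
    """
    r_chosen = reward_model(chosen_ids)      # shape: (batch,)
    r_rejected = reward_model(rejected_ids)  # shape: (batch,)
    # Maximise log sigmoid(r_chosen - r_rejected), i.e. the modelled
    # probability that the preferred response gets the higher reward.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```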

Bayesian Low-rank Adaptation for Large Language Models

2 code implementations • 24 Aug 2023 • Adam X. Yang, Maxime Robeyns, Xi Wang, Laurence Aitchison

Low-rank adaptation (LoRA) has emerged as a new paradigm for cost-efficient fine-tuning of large language models (LLMs).
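
For context, LoRA freezes the pretrained weight and learns a low-rank additive update. The sketch below is a generic PyTorch illustration under common conventions (zero-initialised B so training starts from the pretrained model, alpha/r scaling), not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a rank-r LoRA update."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank update x A^T B^T, scaled by alpha/r.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```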

MONGOOSE: Path-wise Smooth Bayesian Optimisation via Meta-learning

no code implementations • 22 Feb 2023 • Adam X. Yang, Laurence Aitchison, Henry B. Moss

In Bayesian optimisation, we often seek to minimise the black-box objective functions that arise in real-world physical systems.

Bayesian Optimisation • Meta-Learning
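
As background for the abstract, a generic Bayesian-optimisation loop with a GP surrogate and expected improvement is sketched below (assuming scikit-learn and SciPy); MONGOOSE's meta-learned, path-wise smooth acquisition optimisation is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp, y_best):
    # EI for minimisation: expected amount by which each candidate
    # improves on the incumbent best value y_best.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds, n_init=5, n_iter=20, seed=0):
    """Minimise black-box f over a box. `bounds` has shape (d, 2)."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        # Crude random-search maximisation of the acquisition function.
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(512, d))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```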

A theory of representation learning gives a deep generalisation of kernel methods

no code implementations • 30 Aug 2021 • Adam X. Yang, Maxime Robeyns, Edward Milsom, Ben Anson, Nandi Schoots, Laurence Aitchison

In particular, we show that Deep Gaussian processes (DGPs) in the Bayesian representation learning limit have exactly multivariate Gaussian posteriors, and the posterior covariances can be obtained by optimizing an interpretable objective combining a log-likelihood to improve performance with a series of KL-divergences which keep the posteriors close to the prior.

Bayesian Inference • Gaussian Processes • +1
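
The "interpretable objective" described in the abstract has the generic shape of an evidence lower bound. Schematically, with placeholder notation that is assumed here rather than taken from the paper:

```latex
\mathcal{L}
  = \mathbb{E}_{q}\!\big[\log p(\mathbf{Y} \mid \mathbf{F}_L)\big]
  - \sum_{\ell=1}^{L} \mathrm{KL}\!\big(q(\mathbf{F}_\ell) \,\|\, p(\mathbf{F}_\ell \mid \mathbf{F}_{\ell-1})\big)
```

Here the first term is the log-likelihood that improves performance and each KL term keeps the layer-ℓ posterior q(F_ℓ) close to its conditional prior; F_ℓ is a generic stand-in for the layer-ℓ quantity, not the paper's own symbols.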

Deep kernel processes

no code implementations • 4 Oct 2020 • Laurence Aitchison, Adam X. Yang, Sebastian W. Ober

We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines.

Gaussian Processes • Variational Inference
