Search Results for author: Sebastian Farquhar

Found 28 papers, 13 papers with code

Challenges with unsupervised LLM knowledge discovery

no code implementations • 15 Dec 2023 • Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, Rohin Shah

We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge; instead, they seem to discover whatever feature of the activations is most prominent.

Language Modelling • Large Language Model
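For context, the unsupervised methods in question include contrast-pair probes such as contrast-consistent search (CCS). Below is a minimal sketch of such a probe under illustrative assumptions (synthetic activations, a plain training loop, no normalisation); the paper's point is that this objective can be satisfied by any prominent feature of the activations, not only knowledge.

```python
import torch

def ccs_loss(p_pos, p_neg):
    # Consistency: probabilities for a statement and its negation should sum to one.
    consistency = (p_pos + p_neg - 1.0) ** 2
    # Confidence: discourage the degenerate answer p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

# Hypothetical activations for contrast pairs: (n_examples, hidden_dim).
acts_pos = torch.randn(256, 768)
acts_neg = torch.randn(256, 768)

probe = torch.nn.Sequential(torch.nn.Linear(768, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ccs_loss(probe(acts_pos).squeeze(-1), probe(acts_neg).squeeze(-1))
    loss.backward()
    opt.step()
```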

Prediction-Oriented Bayesian Active Learning

1 code implementation • 17 Apr 2023 • Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, Tom Rainforth

Information-theoretic approaches to active learning have traditionally focused on maximising the information gathered about the model parameters, most commonly by optimising the BALD score.

Active Learning
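For reference, a minimal sketch of the BALD score mentioned in the snippet, estimated from Monte Carlo predictive samples (e.g. MC dropout). The array shapes and the synthetic Dirichlet samples are assumptions for illustration.

```python
import numpy as np

def bald_scores(probs):
    """probs: (n_mc_samples, n_points, n_classes) predictive samples."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                       # (n_points, n_classes)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)        # total uncertainty
    mean_of_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)  # aleatoric part
    return entropy_of_mean - mean_of_entropy                          # epistemic part = BALD

probs = np.random.dirichlet(np.ones(10), size=(20, 1000))  # fake MC samples
next_query = int(np.argmax(bald_scores(probs)))
```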

Tracr: Compiled Transformers as a Laboratory for Interpretability

1 code implementation • NeurIPS 2023 • David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Thomas McGrath, Vladimir Mikulik

Additionally, the known structure of Tracr-compiled models can serve as ground-truth for evaluating interpretability methods.

Understanding Approximation for Bayesian Inference in Neural Networks

no code implementations • 11 Nov 2022 • Sebastian Farquhar

To assess a model's ability to incorporate different parts of the Bayesian framework, we can identify desirable characteristic behaviours of Bayesian reasoning and pick decision problems that make heavy use of those behaviours.

Active Learning • Bayesian Inference • +1

Do Bayesian Neural Networks Need To Be Fully Stochastic?

2 code implementations • 11 Nov 2022 • Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, Tom Rainforth

We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary.
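A minimal sketch of the kind of partially stochastic network the paper argues can suffice: a deterministic backbone with a mean-field Gaussian last layer sampled via the reparameterisation trick. The architecture, sizes, and initialisation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterised weight sample on every forward pass.
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()

backbone = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # deterministic
head = BayesianLinear(128, 10)                            # stochastic
logits = head(backbone(torch.randn(32, 784)))
```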

Discovering Agents

no code implementations • 17 Aug 2022 • Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott, Tom Everitt

Causal models of agents have been used to analyse the safety aspects of machine learning systems.

Causal Discovery

Path-Specific Objectives for Safer Agent Incentives

no code implementations • 21 Apr 2022 • Sebastian Farquhar, Ryan Carey, Tom Everitt

We train agents to maximize the causal effect of actions on the expected return that is not mediated by the delicate parts of state, using causal influence diagram analysis.

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients

1 code implementation • ICLR 2022 • Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.
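ProsPr scores weights using meta-gradients taken through the first few training steps. The sketch below simplifies this to a single-step, SNIP-style gradient-times-weight saliency with per-layer thresholds; treat it as an assumption-laden stand-in rather than the paper's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))

loss = F.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))

keep_fraction = 0.1
for p, g in zip(model.parameters(), grads):
    saliency = (p * g).abs()
    k = max(1, int(keep_fraction * saliency.numel()))
    threshold = saliency.flatten().topk(k).values.min()
    p.data *= (saliency >= threshold).float()  # zero out low-saliency weights
```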

Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning

2 code implementations • 22 Jun 2021 • Andreas Kirsch, Sebastian Farquhar, Parmida Atighehchian, Andrew Jesson, Frederic Branchaud-Charron, Yarin Gal

We examine a simple stochastic strategy for adapting well-known single-point acquisition functions to allow batch active learning.

Active Learning
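A minimal sketch of the strategy: perturb the acquisition scores with Gumbel noise and take the top-k, which samples a batch without replacement in proportion to softmax-scaled scores. The temperature and this particular softmax variant (the paper also considers variants on log scores) are choices made here for illustration.

```python
import numpy as np

def stochastic_batch(scores, batch_size, temperature=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    noisy = scores / temperature + gumbel
    return np.argsort(-noisy)[:batch_size]  # Gumbel-top-k batch

scores = np.random.rand(10_000)            # e.g. per-point BALD scores
batch = stochastic_batch(scores, batch_size=64)
```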

Active Testing: Sample-Efficient Model Evaluation

1 code implementation • 9 Mar 2021 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth

While approaches like active learning reduce the number of labels needed for model training, the existing literature largely ignores the cost of labeling test data, typically assuming, unrealistically, that large test sets are available for model evaluation.

Active Learning • Gaussian Processes
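The idea can be illustrated with a toy importance-sampling estimator: label only the test points drawn from a loss-seeking proposal, then reweight so the risk estimate stays unbiased. The surrogate and the with-replacement sampling below are simplifying assumptions; the paper's estimator also handles sampling without replacement.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
true_losses = rng.gamma(2.0, 1.0, size=N)          # unknown in practice
surrogate = true_losses + rng.normal(0, 0.5, N)    # stand-in acquisition signal

q = np.clip(surrogate, 1e-3, None)
q /= q.sum()                                       # proposal over the test pool

M = 200
idx = rng.choice(N, size=M, p=q)                   # label only M points
risk_estimate = np.mean(true_losses[idx] / (N * q[idx]))
print(risk_estimate, true_losses.mean())           # close, with far fewer labels
```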

On Statistical Bias In Active Learning: How and When To Fix It

no code implementations • ICLR 2021 • Sebastian Farquhar, Yarin Gal, Tom Rainforth

Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution.

Active Learning
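A minimal sketch of the paper's LURE-style reweighting for points acquired without replacement; the inputs are synthetic assumptions. As a sanity check, a uniform without-replacement proposal makes every weight collapse to 1, recovering the naive mean.

```python
import numpy as np

def lure_estimate(losses, q_probs, N):
    """losses[m]: loss of the m-th acquired point; q_probs[m]: probability
    the acquisition distribution gave that point at step m (over the
    remaining pool); N: total pool size."""
    M = len(losses)
    weights = np.array([
        1.0 + (N - M) / (N - (m + 1)) * (1.0 / ((N - m) * q_probs[m]) - 1.0)
        for m in range(M)
    ])
    return np.mean(weights * losses)

N, M = 10_000, 200
q_probs = 1.0 / (N - np.arange(M))   # uniform proposal -> weights of exactly 1
losses = np.random.rand(M)
print(lure_estimate(losses, q_probs, N), losses.mean())  # identical
```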

Single Shot Structured Pruning Before Training

no code implementations • 1 Jul 2020 • Joost van Amersfoort, Milad Alizadeh, Sebastian Farquhar, Nicholas Lane, Yarin Gal

We introduce a method to speed up training by 2x and inference by 3x in deep neural networks using structured pruning applied before training.
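A minimal sketch of structured pruning before training: score whole output channels and build a genuinely smaller layer, which is what speeds up both training and inference. The gradient-times-weight channel score here is a simplifying assumption, not necessarily the paper's criterion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 64, 3, padding=1)
head = nn.Linear(64, 10)
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))

feats = conv(x).mean(dim=(2, 3))           # global average pool -> (16, 64)
loss = F.cross_entropy(head(feats), y)
loss.backward()

# Gradient-times-weight saliency, aggregated per output channel.
saliency = (conv.weight * conv.weight.grad).abs().sum(dim=(1, 2, 3))
keep = saliency.topk(32).indices           # keep the top half of channels

pruned = nn.Conv2d(3, 32, 3, padding=1)    # a genuinely smaller layer
pruned.weight.data = conv.weight.data[keep].clone()
pruned.bias.data = conv.bias.data[keep].clone()
```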

Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations

no code implementations • NeurIPS 2020 • Sebastian Farquhar, Lewis Smith, Yarin Gal

We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks.

Variational Inference
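For reference, a minimal sketch of the mean-field Gaussian layer the paper studies, trained with the reparameterisation trick; the paper's claim is that stacking such layers in deep networks is less restrictive than commonly assumed. Prior scale and initialisation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    def __init__(self, d_in, d_out, prior_sigma=1.0):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))
        self.prior_sigma = prior_sigma

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterised sample
        return x @ w.t()

    def kl(self):
        # KL(N(mu, sigma^2) || N(0, p^2)) for each independent weight.
        sigma, p = F.softplus(self.rho), self.prior_sigma
        return (torch.log(p / sigma) + (sigma**2 + self.mu**2) / (2 * p**2) - 0.5).sum()
```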

A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

1 code implementation • 22 Dec 2019 • Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon, Yarin Gal

From our comparison we conclude that some current techniques that solve benchmarks such as UCI 'overfit' their uncertainty to the dataset; when evaluated on our benchmark, they underperform simpler baselines.

Out-of-Distribution Detection

Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning

4 code implementations • 1 Jul 2019 • Sebastian Farquhar, Michael Osborne, Yarin Gal

The Radial BNN is motivated by avoiding a sampling problem in 'mean-field' variational inference (MFVI) caused by the so-called 'soap-bubble' pathology of multivariate Gaussians.

Continual Learning • Variational Inference
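A minimal sketch of the radial sampling trick, assuming per-layer normalisation: draw a direction by normalising a Gaussian sample and a scalar radius from a one-dimensional Gaussian, so posterior samples no longer concentrate on a thin 'soap-bubble' shell far from the mean.

```python
import torch

def radial_sample(mu, sigma):
    eps = torch.randn_like(mu)
    direction = eps / eps.norm()   # uniform on the unit hypersphere
    r = torch.randn(()).abs()      # scalar radial distance
    return mu + sigma * direction * r

mu, sigma = torch.zeros(300, 784), 0.1 * torch.ones(300, 784)
w = radial_sample(mu, sigma)       # one weight sample for a layer
```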

A Unifying Bayesian View of Continual Learning

2 code implementations • 18 Feb 2019 • Sebastian Farquhar, Yarin Gal

From a Bayesian perspective, continual learning seems straightforward: given the model posterior, one would simply use it as the prior for the next task.

Continual Learning
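A worked toy example of this recursion, using a conjugate Gaussian model (unknown mean, known noise variance) so the posterior-to-prior handoff is exact; the synthetic 'tasks' are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
prior_mu, prior_var = 0.0, 10.0
noise_var = 1.0

for task_mean in [2.0, 2.5]:                 # two sequential "tasks"
    data = rng.normal(task_mean, np.sqrt(noise_var), size=50)
    # Conjugate update: the posterior for this task...
    post_var = 1.0 / (1.0 / prior_var + len(data) / noise_var)
    post_mu = post_var * (prior_mu / prior_var + data.sum() / noise_var)
    # ...becomes the prior for the next task.
    prior_mu, prior_var = post_mu, post_var

print(prior_mu, prior_var)
```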

Differentially Private Continual Learning

no code implementations • 18 Feb 2019 • Sebastian Farquhar, Yarin Gal

Catastrophic forgetting can be a significant problem for institutions that must delete historic data for privacy reasons.

Continual Learning • Variational Inference

Towards Robust Evaluations of Continual Learning

no code implementations • 24 May 2018 • Sebastian Farquhar, Yarin Gal

Experiments used in current continual learning research do not faithfully assess fundamental challenges of learning continually.

Continual Learning
