no code implementations • 4 Apr 2024 • Angus Nicolson, Lisa Schut, J. Alison Noble, Yarin Gal
We introduce tools designed to detect the presence of these properties, offer insight into how they affect the derived explanations, and recommend ways to minimise their impact.
no code implementations • 25 Oct 2023 • Lisa Schut, Nenad Tomasev, Tom McGrath, Demis Hassabis, Ulrich Paquet, Been Kim
Artificial Intelligence (AI) systems have made remarkable progress, attaining super-human performance across various domains.
no code implementations • 17 Aug 2023 • Tom Zahavy, Vivek Veeriah, Shaobo Hou, Kevin Waugh, Matthew Lai, Edouard Leurent, Nenad Tomasev, Lisa Schut, Demis Hassabis, Satinder Singh
In particular, we investigate whether a team of diverse AI systems can outperform a single AI in challenging tasks by generating more ideas as a group and then selecting the best ones.
1 code implementation • 29 Nov 2021 • Benedikt Höltgen, Lisa Schut, Jan M. Brauner, Yarin Gal
This is the aim of algorithms that generate counterfactual explanations.
1 code implementation • 16 Mar 2021 • Lisa Schut, Oscar Key, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal
Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions.
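To make the idea concrete, here is a minimal, hedged sketch of a generic gradient-based counterfactual search (in the spirit of Wachter-style CEs, not necessarily the method proposed in this paper): perturb the input until the classifier predicts a chosen target class, while penalising distance from the original input. The function name `counterfactual` and the `dist_weight` trade-off parameter are illustrative choices, not the paper's API.

```python
# Minimal sketch of a gradient-based counterfactual search (illustrative, not
# the paper's exact method): nudge x until the model predicts target_class,
# while keeping the counterfactual close to the original input.
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, dist_weight=0.1, steps=200, lr=0.05):
    """Return a counterfactual x' near x that the model assigns to target_class."""
    x_cf = x.clone().detach().requires_grad_(True)      # candidate counterfactual
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))
        loss = F.cross_entropy(logits, target)           # push prediction to target
        loss = loss + dist_weight * (x_cf - x).norm(p=1) # stay close to the original
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```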
no code implementations • NeurIPS 2020 • Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk
This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood.
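A rough sketch of how such an estimate could look, assuming the standard prequential decomposition log p(D) = Σ_i log p(d_i | d_<i): accumulating the per-example log losses during one sequential pass of training gives a "training speed" style proxy for the log marginal likelihood. This is an illustration of the general idea only; the paper's actual estimator may differ, and `training_speed_log_evidence` is a hypothetical helper name.

```python
# Illustrative sketch (an assumption, not the paper's exact estimator): since
# log p(D) = sum_i log p(d_i | d_<i), the negative sum of per-example losses
# recorded just before each training update approximates the log evidence.
import torch.nn.functional as F

def training_speed_log_evidence(model, optimizer, dataset):
    """Accumulate log p(y_i | x_i, d_<i) over one sequential training pass."""
    log_evidence = 0.0
    for x, y in dataset:                       # examples presented one at a time
        logits = model(x.unsqueeze(0))
        loss = F.cross_entropy(logits, y.unsqueeze(0))
        log_evidence -= loss.item()            # log-loss measured before updating on (x, y)
        optimizer.zero_grad()
        loss.backward()                        # then take a training step on it
        optimizer.step()
    return log_evidence                        # faster training => larger estimated evidence
```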
no code implementations • 28 Sep 2020 • Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal
Reliable yet efficient evaluation of a proposed architecture's generalisation performance is crucial to the success of neural architecture search (NAS).
2 code implementations • NeurIPS 2021 • Binxin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal
Reliable yet efficient evaluation of a proposed architecture's generalisation performance is crucial to the success of neural architecture search (NAS).
no code implementations • 7 Apr 2020 • Lewis Smith, Lisa Schut, Yarin Gal, Mark van der Wilk
'Capsule' models try to explicitly represent the poses of objects, enforcing a linear relationship between an object's pose and that of its constituent parts.
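The linear pose relationship can be illustrated with a toy example: each part's pose is the object's pose matrix composed with a fixed object-to-part transform, so all parts move rigidly with the object. The names below (`part_poses`, `object_to_part_transforms`) are illustrative and not the paper's API.

```python
# Toy illustration of the linear pose relationship capsule models assume:
# part_pose = object_pose @ object_to_part_transform (homogeneous coordinates).
import numpy as np

def part_poses(object_pose, object_to_part_transforms):
    """object_pose: (4, 4) homogeneous transform; transforms: list of (4, 4) arrays."""
    return [object_pose @ t for t in object_to_part_transforms]

# Example: an object translated by (2, 1); its two parts sit at fixed offsets
# in the object frame, so they translate along with it.
object_pose = np.eye(4)
object_pose[:2, 3] = [2.0, 1.0]                 # object position in the scene
offsets = [np.eye(4), np.eye(4)]
offsets[0][:2, 3] = [0.5, 0.0]                  # part 1 offset in the object frame
offsets[1][:2, 3] = [-0.5, 0.0]                 # part 2 offset in the object frame
poses = part_poses(object_pose, offsets)        # parts move rigidly with the object
```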
no code implementations • 25 Sep 2019 • Lisa Schut, Yarin Gal
Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification.
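As a hedged sketch of the phenomenon described here (not of the paper's detection method), one can craft a standard FGSM perturbation and compare simple input-gradient saliency maps of the clean and perturbed images to see how the salient pixels shift.

```python
# Sketch: FGSM adversarial perturbation plus a comparison of input-gradient
# saliency maps before and after. Illustrates the "shift in salient features"
# from the abstract; the paper's actual method may differ.
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    """Absolute input gradient of the loss: a simple saliency map."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return x.grad.abs()

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Comparing saliency(model, x, y) with saliency(model, fgsm(model, x, y), y)
# shows how the perturbation shifts which pixels the model treats as salient.
```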