1 code implementation • 5 Dec 2023 • Justin Engelmann, Jamie Burke, Charlene Hamid, Megan Reid-Schachter, Dan Pugh, Neeraj Dhaun, Diana Moukaddem, Lyle Gray, Niall Strang, Paul McGraw, Amos Storkey, Paul J. Steptoe, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Ian J. C. MacCormick
We analysed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error (MAE)) in internal and external test sets.
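The agreement metrics named here are standard; as a minimal illustration (the masks and thickness values below are made up, not from the paper), Dice and MAE can be computed as:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks (flat 0/1 lists)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

def mae(xs, ys):
    """Mean absolute error between two series of derived choroid metrics."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Toy masks: 2 of 3 foreground pixels overlap
a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
print(dice(a, b))                           # 2*2/(3+3) ≈ 0.667
print(mae([250.0, 300.0], [245.0, 310.0]))  # 7.5
```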
no code implementations • 15 Nov 2023 • Perry Gibson, José Cano, Elliot J. Crowley, Amos Storkey, Michael O'Boyle
Deep Neural Networks (DNNs) are extremely computationally demanding, which presents a large barrier to their deployment on resource-constrained devices.
no code implementations • 9 Oct 2023 • Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process.
no code implementations • 3 Oct 2023 • Thomas L. Lee, Amos Storkey
Motivated by an analysis of the linear case, we show that per-chunk weight averaging improves performance in the chunking setting and that this performance transfers to the full CL setting.
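A minimal sketch of per-chunk weight averaging in the linear case: fit a model independently on each data chunk, then average the resulting weights. The one-parameter least-squares model and the chunk data are illustrative assumptions, not the paper's setup.

```python
def fit_chunk(xs, ys):
    """Closed-form least-squares slope for y ≈ w*x on one data chunk."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def chunked_average(chunks):
    """Train independently on each chunk, then average the per-chunk weights."""
    ws = [fit_chunk(xs, ys) for xs, ys in chunks]
    return sum(ws) / len(ws)

# Two chunks drawn from the same underlying relation y = 2x
chunks = [([1.0, 2.0], [2.0, 4.0]), ([3.0, 4.0], [6.0, 8.0])]
print(chunked_average(chunks))  # 2.0
```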
no code implementations • 29 Sep 2023 • Alessandro Fontanella, Wenwen Li, Grant Mair, Antreas Antoniou, Eleanor Platt, Paul Armitage, Emanuele Trucco, Joanna Wardlaw, Amos Storkey
DL methods can be designed for AIS lesion detection on CT using the vast quantities of routinely-collected CT brain scan data.
no code implementations • 26 Sep 2023 • Alessandro Fontanella, Wenwen Li, Grant Mair, Antreas Antoniou, Eleanor Platt, Chloe Martin, Paul Armitage, Emanuele Trucco, Joanna Wardlaw, Amos Storkey
Despite the large amount of brain CT data generated in clinical practice, the availability of CT datasets for deep learning (DL) research is currently limited.
no code implementations • 19 Aug 2023 • Asif Khan, Amos Storkey
The contrastive methods are popular choices for learning the representation of nodes in a graph.
1 code implementation • 3 Aug 2023 • Alessandro Fontanella, Grant Mair, Joanna Wardlaw, Emanuele Trucco, Amos Storkey
Segmentation masks of pathological areas are useful in many medical applications, such as brain tumour and stroke management.
1 code implementation • 25 Jul 2023 • Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
For this task, we present a second model, QuickQual MEga Minified Estimator (QuickQual-MEME), that consists of only 10 parameters on top of an off-the-shelf DenseNet121 and can distinguish between gradable and ungradable images with an accuracy of 89.18% (AUC: 0.9537).
no code implementations • 24 Jul 2023 • William Toner, Amos Storkey
Building upon this observation, we propose imposing a lower bound on the empirical risk during training to mitigate overfitting.
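One way to impose a lower bound b on the training loss is to reflect the loss about b, so that gradients push the loss back up whenever it falls below the bound. Whether the paper uses this exact form is an assumption; this is just a sketch of the idea.

```python
def bounded_loss(loss, b):
    """Clamp the empirical risk from below at b via |L - b| + b.
    Above b the loss is unchanged; below b its gradient flips sign,
    driving the loss back toward the bound instead of toward zero."""
    return abs(loss - b) + b

print(bounded_loss(0.5, 0.25))    # 0.5 (unchanged above the bound)
print(bounded_loss(0.125, 0.25))  # 0.375 (reflected back above the bound)
```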
1 code implementation • 3 Jul 2023 • Jamie Burke, Justin Engelmann, Charlene Hamid, Megan Reid-Schachter, Tom Pearson, Dan Pugh, Neeraj Dhaun, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Amos Storkey, Ian J. C. MacCormick
Results: DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s (±15.09) using GPET to 1.25s (±0.10) using DeepGPET.
no code implementations • 30 May 2023 • Thomas L. Lee, Amos Storkey
DeepCCG works by updating the posterior of a class conditional Gaussian classifier such that the classifier adapts instantly to representation shift.
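A minimal sketch of a class-conditional Gaussian classifier with an online posterior-mean update, assuming a shared identity covariance (the class labels and toy embeddings are illustrative; the paper's exact update may differ). With known variance, the conjugate posterior mean of each class Gaussian is just a running average of the observed embeddings, so the classifier adapts as soon as new representations arrive.

```python
class GaussianClassifier:
    """Class-conditional Gaussian classifier with online mean updates
    (shared identity covariance, so prediction is nearest-mean)."""
    def __init__(self, dim):
        self.dim = dim
        self.counts = {}
        self.means = {}

    def update(self, z, label):
        # Running-average update of the posterior mean for this class.
        n = self.counts.get(label, 0)
        mu = self.means.get(label, [0.0] * self.dim)
        self.means[label] = [(m * n + zi) / (n + 1) for m, zi in zip(mu, z)]
        self.counts[label] = n + 1

    def predict(self, z):
        # Assign to the class with the nearest posterior mean.
        def neg_dist(label):
            mu = self.means[label]
            return -sum((zi - mi) ** 2 for zi, mi in zip(z, mu))
        return max(self.means, key=neg_dist)

clf = GaussianClassifier(dim=2)
clf.update([0.0, 0.0], "a"); clf.update([2.0, 0.0], "a")
clf.update([5.0, 5.0], "b")
print(clf.predict([1.0, 0.5]))  # a
```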
1 code implementation • 27 Mar 2023 • Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey
We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images.
1 code implementation • 30 Jan 2023 • Adam Jelley, Amos Storkey, Antreas Antoniou, Sam Devlin
We evaluate our approach on an adaptation of a comprehensive few-shot learning benchmark, Meta-Dataset, and demonstrate the benefits of POEM over other meta-learning methods at representation learning from partial observations.
1 code implementation • 8 Aug 2022 • Asif Khan, Amos Storkey
We propose robustness evaluation scores using the eigenspectrum of a pullback metric tensor.
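For a decoder g mapping latents to data, the pullback metric at a latent point is M = JᵀJ, where J is the Jacobian of g; its eigenvalues measure how strongly each latent direction is stretched in data space. A sketch using a linear map (so J is explicit) and a closed-form symmetric 2x2 eigendecomposition; the robustness scores themselves are the paper's contribution and are not reproduced here.

```python
import math

def pullback_metric(J):
    """M = J^T J for a Jacobian J given as a list of rows."""
    cols = list(zip(*J))
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]

def eig2x2_sym(M):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    return ((a + c + d) / 2, (a + c - d) / 2)

# Jacobian of a linear 'decoder' mapping 2-D latents to 3-D outputs
J = [[2.0, 0.0],
     [0.0, 1.0],
     [0.0, 0.0]]
M = pullback_metric(J)
print(eig2x2_sym(M))  # (4.0, 1.0): one latent direction is stretched by 2
```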
no code implementations • 12 Jul 2022 • Justin Engelmann, Ana Villaplana-Velasco, Amos Storkey, Miguel O. Bernabeu
Thus, methods for calculating retinal traits tend to be complex, multi-step pipelines that can only be applied to high quality images.
1 code implementation • 5 Jul 2022 • Lukas Schäfer, Filippos Christianos, Amos Storkey, Stefano V. Albrecht
We show that a team of agents is able to adapt to novel tasks when provided with task embeddings.
Multi-agent Reinforcement Learning • Reinforcement Learning +1
1 code implementation • 11 Mar 2022 • Justin Engelmann, Alice D. McTrusty, Ian J. C. MacCormick, Emma Pead, Amos Storkey, Miguel O. Bernabeu
Previous studies showed that deep learning (DL) models are effective for detecting retinal disease in UWF images, but primarily considered individual diseases under less-than-realistic conditions (excluding images with other diseases, artefacts, comorbidities, or borderline cases; and balancing healthy and diseased images) and did not systematically investigate which regions of the UWF images are relevant for disease detection.
1 code implementation • 10 Mar 2022 • Chenhongyi Yang, Mateusz Ochal, Amos Storkey, Elliot J. Crowley
Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing KD baselines.
1 code implementation • 29 Jan 2022 • Asif Khan, Alexander I. Cowen-Rivers, Antoine Grosnit, Derrick-Goh-Xin Deik, Philippe A. Robert, Victor Greiff, Eva Smorodina, Puneet Rawat, Kamil Dreczkowski, Rahmad Akbar, Rasul Tutunov, Dany Bou-Ammar, Jun Wang, Amos Storkey, Haitham Bou-Ammar
software suite as a black-box oracle to score the target specificity and affinity of designed antibodies in silico in an unconstrained fashion (Robert et al., 2021).
no code implementations • 17 Dec 2021 • Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
We propose the pixel-wise aggregation of image-wise explanations as a simple method to obtain label-wise and overall global explanations.
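The aggregation step can be sketched directly: average per-image saliency maps pixel by pixel to obtain one global map per label (the 2x2 maps below are hypothetical, flattened for brevity).

```python
def aggregate_explanations(maps):
    """Pixel-wise mean of per-image saliency maps (flat lists of equal
    length), yielding one global explanation for a given label."""
    n = len(maps)
    return [sum(px) / n for px in zip(*maps)]

# Two per-image maps for the same label; a hypothetical 2x2 image, flattened
label_maps = [[0.75, 0.25, 0.0, 0.5],
              [0.25, 0.75, 0.0, 0.5]]
print(aggregate_explanations(label_maps))  # [0.5, 0.5, 0.0, 0.5]
```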
1 code implementation • 2 Dec 2021 • Asif Khan, Amos Storkey
We introduce HALO, a deep generative model utilising HAmiltonian Latent Operators to reliably disentangle content and motion information in image sequences.
1 code implementation • 20 Jun 2021 • Benedict Leimkuhler, Tiffany Vlaar, Timothée Pouchon, Amos Storkey
We employ constraints to control the parameter space of deep neural networks throughout training.
1 code implementation • ICLR Workshop Learning_to_Learn 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang
Meta-Learning (ML) has proven to be a useful tool for training Few-Shot Learning (FSL) algorithms by exposure to batches of tasks sampled from a meta-dataset.
1 code implementation • 7 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang
Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation.
no code implementations • 1 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Jose Vazquez, Amos Storkey, Sen Wang
Few-shot learning aims to train models on a limited number of labeled samples from a support set in order to generalize to unseen samples from a query set.
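The support/query episode structure can be sketched as follows; the dataset layout and sampling routine are illustrative, not any specific benchmark's protocol.

```python
import random

def sample_episode(dataset, n_way, k_shot, q_queries, rng):
    """Sample one few-shot episode from a dict {class: [examples]}:
    a support set of k labelled examples per class and a disjoint query set."""
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        items = rng.sample(dataset[c], k_shot + q_queries)
        support += [(x, c) for x in items[:k_shot]]
        query += [(x, c) for x in items[k_shot:]]
    return support, query

data = {c: [f"{c}_{i}" for i in range(10)] for c in "abcde"}
rng = random.Random(0)
support, query = sample_episode(data, n_way=3, k_shot=2, q_queries=1, rng=rng)
print(len(support), len(query))  # 6 3
```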
no code implementations • 19 Nov 2020 • Luke Darlow, Stanisław Jastrzębski, Amos Storkey
By training neural networks on these adversarial examples, we can improve their generalisation in collider bias settings.
no code implementations • 28 Sep 2020 • Paul Micaelli, Amos Storkey
We demonstrate that the hyperparameters of this optimizer can be learned non-greedily without gradient degradation over ~10^4 inner gradient steps, by only requiring ~10 outer gradient steps.
1 code implementation • NeurIPS 2021 • Paul Micaelli, Amos Storkey
Gradient-based hyperparameter optimization has earned a widespread popularity in the context of few-shot meta-learning, but remains broadly impractical for tasks with long horizons (many gradient steps), due to memory scaling and gradient degradation issues.
no code implementations • 17 Jun 2020 • Benedict Leimkuhler, Timothée Pouchon, Tiffany Vlaar, Amos Storkey
We propose a method for efficiently incorporating constraints into a stochastic gradient Langevin framework for the training of deep neural networks.
1 code implementation • 17 Jun 2020 • Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average respectively.
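The appeal of grouped convolutions is easy to quantify: each output channel sees only c_in/groups input channels, dividing the weight (and FLOP) cost by the group count. A back-of-the-envelope sketch with illustrative channel sizes:

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2-D convolution layer: each of the c_out filters
    spans only c_in / groups input channels, so grouping divides the cost."""
    assert c_in % groups == 0 and c_out % groups == 0
    return c_out * (c_in // groups) * k * k

print(conv_params(256, 256, 3))            # 589824
print(conv_params(256, 256, 3, groups=8))  # 73728: 8x fewer weights
```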
1 code implementation • NeurIPS 2020 • Massimiliano Patacchiola, Amos Storkey
In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data.
2 code implementations • 8 Jun 2020 • Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley
In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance.
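One concrete way to realise this idea (not necessarily the paper's exact score) is to binarise each datapoint's ReLU activation pattern in an untrained network and count, for each pair of datapoints, how many units share the same on/off state; low overlap suggests the network already separates the inputs.

```python
def binarise(acts):
    """ReLU activation pattern of one datapoint: which units fire."""
    return [1 if a > 0 else 0 for a in acts]

def overlap_matrix(batch_acts):
    """Pairwise count of units with matching on/off state across a batch."""
    codes = [binarise(a) for a in batch_acts]
    return [[sum(ci == cj for ci, cj in zip(a, b)) for b in codes]
            for a in codes]

# Hypothetical pre-activations for 3 datapoints at 4 hidden units
acts = [[0.5, -0.2, 1.0, -0.1],
        [0.4, -0.3, 0.9, -0.2],
        [-0.5, 0.2, -1.0, 0.1]]
K = overlap_matrix(acts)
print(K)  # [[4, 4, 0], [4, 4, 0], [0, 0, 4]]
```

The first two datapoints share an identical activation pattern (overlap 4), while the third is fully distinguished from them (overlap 0).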
2 code implementations • 15 Apr 2020 • Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, Amos Storkey
Both few-shot and continual learning have seen substantial progress in recent years due to the introduction of proper benchmarks.
1 code implementation • 11 Apr 2020 • Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey
We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning.
no code implementations • 13 Mar 2020 • Luke Nicholas Darlow, Amos Storkey
We introduce deep hierarchical object grouping (DHOG) that computes a number of distinct discrete representations of images in a hierarchical order, eventually generating representations that better optimise the mutual information objective.
Ranked #18 on Image Clustering on CIFAR-10 (using extra training data)
no code implementations • ICLR 2019 • Luke Nicholas Darlow, Amos Storkey
The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.
1 code implementation • ICLR Workshop DeepDiffEq 2019 • Stathi Fotiadis, Eduardo Pignatelli, Mario Lino Valencia, Chris Cantwell, Amos Storkey, Anil A. Bharath
Dynamical systems can be modelled by partial differential equations, and numerical computations are used everywhere in science and engineering.
no code implementations • 20 Feb 2020 • Valentin Radu, Kuba Kaszyk, Yuan Wen, Jack Turner, Jose Cano, Elliot J. Crowley, Bjorn Franke, Amos Storkey, Michael O'Boyle
We evaluate higher level libraries, which analyze the input characteristics of a convolutional layer, based on which they produce optimized OpenCL (Arm Compute Library and TVM) and CUDA (cuDNN) code.
3 code implementations • NeurIPS 2020 • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.
2 code implementations • ICLR 2020 • Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray
The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks.
1 code implementation • 3 Jun 2019 • Gavin Gray, Elliot J. Crowley, Amos Storkey
In response to the development of recent efficient dense layers, this paper shows that something as simple as replacing linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff.
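Structured linear decompositions can take several forms; a low-rank factorisation of the pointwise (1x1) convolution's weight matrix is one of them, and whether it matches the paper's decomposition is an assumption. The efficiency gain is visible from the parameter counts alone:

```python
def pointwise_params(c_in, c_out, rank=None):
    """Weights in a 1x1 convolution: full (c_in * c_out), or factorised
    into two 1x1 convs through a bottleneck of size `rank`."""
    if rank is None:
        return c_in * c_out
    return c_in * rank + rank * c_out

print(pointwise_params(256, 256))           # 65536
print(pointwise_params(256, 256, rank=32))  # 16384: a 4x reduction
```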
1 code implementation • 24 May 2019 • Antreas Antoniou, Amos Storkey
In this paper, we propose a framework called Self-Critique and Adapt or SCA, which learns to learn a label-free loss function, parameterized as a neural network.
Ranked #23 on Few-Shot Image Classification on CUB 200 5-way 5-shot
7 code implementations • NeurIPS 2019 • Paul Micaelli, Amos Storkey
Finally, we also propose a metric to quantify the degree of belief matching between teacher and student in the vicinity of decision boundaries, and observe a significantly higher match between our zero-shot student and the teacher, than between a student distilled with real data and the teacher.
no code implementations • 26 Feb 2019 • Antreas Antoniou, Amos Storkey
The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available.
Data Augmentation • Unsupervised Few-Shot Image Classification +1
no code implementations • 1 Nov 2018 • Antreas Antoniou, Agnieszka Słowik, Elliot J. Crowley, Amos Storkey
Despite their impressive performance in many tasks, deep neural networks often struggle at relational reasoning.
19 code implementations • ICLR 2019 • Yuri Burda, Harrison Edwards, Amos Storkey, Oleg Klimov
In particular, we establish state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods.
no code implementations • 24 Oct 2018 • Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle
The task of accelerating large neural networks on general purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size.
9 code implementations • ICLR 2019 • Antreas Antoniou, Harrison Edwards, Amos Storkey
The field of few-shot learning has recently seen substantial advancements.
no code implementations • 20 Oct 2018 • Gavin Gray, Elliot Crowley, Amos Storkey
Typical recent neural network designs consist primarily of convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting.
no code implementations • NIPS Workshop CDNNRIA 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle
First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network.
2 code implementations • 10 Oct 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle
Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.
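The channel-removal step can be sketched as keeping the channels with the largest filter norms; the L1-norm criterion and toy weights below are illustrative (the full method would interleave this with fine-tuning passes).

```python
def channel_norms(weights):
    """L1 norm of each output channel's filter (filters given as flat lists)."""
    return [sum(abs(w) for w in f) for f in weights]

def prune_channels(weights, keep_ratio):
    """Keep the highest-norm channels, shrinking the layer's width.
    In practice one would fine-tune after each pruning round and repeat."""
    norms = channel_norms(weights)
    n_keep = max(1, int(len(weights) * keep_ratio))
    keep = sorted(sorted(range(len(weights)), key=lambda i: -norms[i])[:n_keep])
    return [weights[i] for i in keep]

# Toy layer: 4 output channels, 2 weights each
layer = [[0.9, -0.8], [0.01, 0.02], [0.5, 0.4], [-0.03, 0.01]]
print(prune_channels(layer, keep_ratio=0.5))  # [[0.9, -0.8], [0.5, 0.4]]
```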
1 code implementation • 19 Sep 2018 • Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.
4 code implementations • ICLR 2019 • Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros
However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent.
Ranked #14 on Atari Games on Atari 2600 Montezuma's Revenge
1 code implementation • ICLR 2019 • Stanisław Jastrzębski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey
When studying the SGD dynamics in relation to the sharpest directions in this initial phase, we find that the SGD step is large compared to the curvature and commonly fails to minimize the loss along the sharpest directions.
no code implementations • ICLR 2018 • Conor Durkan, Amos Storkey, Harrison Edwards
Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario.
no code implementations • ICLR 2018 • Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey
In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization.
7 code implementations • ICLR 2018 • Antreas Antoniou, Amos Storkey, Harrison Edwards
The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items.
1 code implementation • NeurIPS 2018 • Elliot J. Crowley, Gavin Gray, Amos Storkey
Many engineers wish to deploy modern neural networks in memory-limited settings; but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit.
5 code implementations • 7 Jun 2016 • Harrison Edwards, Amos Storkey
We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.
1 code implementation • 18 Nov 2015 • Harrison Edwards, Amos Storkey
The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.
1 code implementation • 10 Dec 2014 • Christopher Clark, Amos Storkey
Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing the previous state of the art on this task by significant margins.
no code implementations • 4 Mar 2014 • Jinli Hu, Amos Storkey
We present a new model for prediction markets, in which we use risk measures to model agents and introduce a market maker to describe the trading process.
no code implementations • 27 Sep 2013 • Peter Orchard, Felix Agakov, Amos Storkey
One of the fundamental tasks of science is to find explainable relationships between observed phenomena.