Search Results for author: Amos Storkey

Found 64 papers, 38 papers with code

DLAS: An Exploration and Assessment of the Deep Learning Acceleration Stack

no code implementations 15 Nov 2023 Perry Gibson, José Cano, Elliot J. Crowley, Amos Storkey, Michael O'Boyle

Deep Neural Networks (DNNs) are extremely computationally demanding, which presents a large barrier to their deployment on resource-constrained devices.

Code Generation

Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning

no code implementations 9 Oct 2023 Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey

Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process.

Offline RL

Chunking: Forgetting Matters in Continual Learning even without Changing Tasks

no code implementations 3 Oct 2023 Thomas L. Lee, Amos Storkey

Motivated by an analysis of the linear case, we show that per-chunk weight averaging improves performance in the chunking setting and that this performance transfers to the full CL setting.

Chunking · Continual Learning
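
Per-chunk weight averaging is simple enough to sketch. Below is a minimal, hypothetical version assuming chunks arrive sequentially and a generic PyTorch training routine; all names are illustrative, not the paper's code:

```python
import copy
import torch

def train_with_chunk_averaging(model, chunks, train_one_chunk):
    """Train sequentially on data chunks; return the model with weights set
    to the running average of the weights reached after each chunk."""
    averaged = None
    for i, chunk in enumerate(chunks, start=1):
        train_one_chunk(model, chunk)       # ordinary training on this chunk
        current = model.state_dict()
        if averaged is None:
            averaged = copy.deepcopy(current)
            continue
        with torch.no_grad():
            for name, w in current.items():
                if w.is_floating_point():   # skip integer buffers (e.g. BN counts)
                    averaged[name] += (w - averaged[name]) / i  # running mean
    model.load_state_dict(averaged)
    return model
```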

Challenges of building medical image datasets for development of deep learning software in stroke

no code implementations 26 Sep 2023 Alessandro Fontanella, Wenwen Li, Grant Mair, Antreas Antoniou, Eleanor Platt, Chloe Martin, Paul Armitage, Emanuele Trucco, Joanna Wardlaw, Amos Storkey

Despite the large amount of brain CT data generated in clinical practice, the availability of CT datasets for deep learning (DL) research is currently limited.

Image Cropping

Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images

1 code implementation 3 Aug 2023 Alessandro Fontanella, Grant Mair, Joanna Wardlaw, Emanuele Trucco, Amos Storkey

Segmentation masks of pathological areas are useful in many medical applications, such as brain tumour and stroke management.

Anatomy · Anomaly Detection +4

QuickQual: Lightweight, convenient retinal image quality scoring with off-the-shelf pretrained models

1 code implementation 25 Jul 2023 Justin Engelmann, Amos Storkey, Miguel O. Bernabeu

For this task, we present a second model, QuickQual MEga Minified Estimator (QuickQual-MEME), which consists of only 10 parameters on top of an off-the-shelf DenseNet121 and can distinguish between gradable and ungradable images with an accuracy of 89.18% (AUC: 0.9537).

Label Noise: Correcting a Correction

no code implementations 24 Jul 2023 William Toner, Amos Storkey

Building upon this observation, we propose imposing a lower bound on the empirical risk during training to mitigate overfitting.
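
One concrete way to impose such a lower bound is a flooding-style objective; the form below and the bound value b are assumptions of this sketch, not necessarily the paper's correction:

```python
import torch

def lower_bounded_loss(loss: torch.Tensor, b: float) -> torch.Tensor:
    """Stop the empirical risk from falling below b: once loss < b the
    gradient flips sign, pushing the risk back up towards the bound."""
    return (loss - b).abs() + b

# Usage inside a training step (b is a tuned hyperparameter):
#   raw = criterion(model(x), y)              # mean loss over the batch
#   lower_bounded_loss(raw, b=0.3).backward()
```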

An open-source deep learning algorithm for efficient and fully-automatic analysis of the choroid in optical coherence tomography

1 code implementation 3 Jul 2023 Jamie Burke, Justin Engelmann, Charlene Hamid, Megan Reid-Schachter, Tom Pearson, Dan Pugh, Neeraj Dhaun, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Amos Storkey, Ian J. C. MacCormick

Results: DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s (±15.09) using GPET to 1.25s (±0.10) using DeepGPET.

Segmentation

Class Conditional Gaussians for Continual Learning

no code implementations 30 May 2023 Thomas L. Lee, Amos Storkey

DeepCCG works by updating the posterior of a class conditional Gaussian classifier such that the classifier adapts instantly to representation shift.

Continual Learning
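
A minimal sketch of a class conditional Gaussian classifier with online mean updates, in the spirit of the description above; the shared isotropic covariance and the update scheme are simplifying assumptions, not the paper's exact model:

```python
import torch

class ClassConditionalGaussian:
    """One running mean per class, shared isotropic covariance: under that
    assumption the MAP class is simply the nearest class mean."""

    def __init__(self, num_classes: int, dim: int):
        self.means = torch.zeros(num_classes, dim)
        self.counts = torch.zeros(num_classes)

    def update(self, z: torch.Tensor, y: torch.Tensor) -> None:
        # Online per-class mean update: the classifier adapts immediately
        # when the embedding distribution shifts.
        for c in y.unique():
            zc = z[y == c]
            n = self.counts[c] + zc.shape[0]
            self.means[c] = (self.counts[c] * self.means[c] + zc.sum(0)) / n
            self.counts[c] = n

    def predict(self, z: torch.Tensor) -> torch.Tensor:
        return torch.cdist(z, self.means).argmin(dim=1)
```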

ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging

1 code implementation 27 Mar 2023 Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw, Grant Mair, Emanuele Trucco, Amos Storkey

We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images.

counterfactual

Contrastive Meta-Learning for Partially Observable Few-Shot Learning

1 code implementation 30 Jan 2023 Adam Jelley, Amos Storkey, Antreas Antoniou, Sam Devlin

We evaluate our approach on an adaptation of a comprehensive few-shot learning benchmark, Meta-Dataset, and demonstrate the benefits of POEM over other meta-learning methods at representation learning from partial observations.

Few-Shot Learning · Representation Learning

Adversarial robustness of VAEs through the lens of local geometry

1 code implementation 8 Aug 2022 Asif Khan, Amos Storkey

We propose robustness evaluation scores using the eigenspectrum of a pullback metric tensor.

Adversarial Robustness
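
The pullback metric here is M(z) = J(z)^T J(z), the Gram matrix of the decoder Jacobian; its eigenspectrum measures how latent perturbations are stretched in image space. A small sketch (the decoder interface is an assumption):

```python
import torch
from torch.autograd.functional import jacobian

def pullback_eigenspectrum(decoder, z: torch.Tensor) -> torch.Tensor:
    """Eigenvalues of the pullback metric M(z) = J(z)^T J(z), where J is the
    Jacobian of the decoder at a single latent point z of shape [latent_dim]."""
    flat_decode = lambda v: decoder(v.unsqueeze(0)).flatten()
    J = jacobian(flat_decode, z)        # shape: [output_dim, latent_dim]
    M = J.T @ J                         # pullback of the Euclidean metric
    return torch.linalg.eigvalsh(M)     # real spectrum of a PSD matrix
```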

Robust and efficient computation of retinal fractal dimension through deep approximation

no code implementations 12 Jul 2022 Justin Engelmann, Ana Villaplana-Velasco, Amos Storkey, Miguel O. Bernabeu

Thus, methods for calculating retinal traits tend to be complex, multi-step pipelines that can only be applied to high-quality images.

Detection of multiple retinal diseases in ultra-widefield fundus images using deep learning: data-driven identification of relevant regions

1 code implementation 11 Mar 2022 Justin Engelmann, Alice D. McTrusty, Ian J. C. MacCormick, Emma Pead, Amos Storkey, Miguel O. Bernabeu

Previous studies showed that deep learning (DL) models are effective for detecting retinal disease in UWF images, but primarily considered individual diseases under less-than-realistic conditions (excluding images with other diseases, artefacts, comorbidities, or borderline cases; and balancing healthy and diseased images) and did not systematically investigate which regions of the UWF images are relevant for disease detection.

Prediction-Guided Distillation for Dense Object Detection

1 code implementation 10 Mar 2022 Chenhongyi Yang, Mateusz Ochal, Amos Storkey, Elliot J. Crowley

Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing KD baselines.

Dense Object Detection · Knowledge Distillation +2

Global explainability in aligned image modalities

no code implementations 17 Dec 2021 Justin Engelmann, Amos Storkey, Miguel O. Bernabeu

We propose the pixel-wise aggregation of image-wise explanations as a simple method to obtain label-wise and overall global explanations.

Position
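
The proposed aggregation is easy to state in code. A minimal sketch, assuming saliency maps have already been computed for a stack of spatially aligned images of one label:

```python
import numpy as np

def global_explanation(saliency_maps: np.ndarray) -> np.ndarray:
    """Pixel-wise mean over image-wise explanations. For aligned modalities
    (every image registered to the same grid), averaging the maps for all
    images of one label yields a label-wise global explanation.

    saliency_maps: array of shape [num_images, H, W]."""
    return saliency_maps.mean(axis=0)
```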

Hamiltonian latent operators for content and motion disentanglement in image sequences

1 code implementation 2 Dec 2021 Asif Khan, Amos Storkey

We introduce HALO, a deep generative model utilising HAmiltonian Latent Operators to reliably disentangle content and motion information in image sequences.

Disentanglement · Motion Disentanglement

Better Training using Weight-Constrained Stochastic Dynamics

1 code implementation 20 Jun 2021 Benedict Leimkuhler, Tiffany Vlaar, Timothée Pouchon, Amos Storkey

We employ constraints to control the parameter space of deep neural networks throughout training.
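
One simple way to realise weight constraints is to project parameters back onto a feasible set after every optimizer step. The per-tensor L2 ball below is an assumption of this illustration, not the paper's specific constrained dynamics:

```python
import torch

@torch.no_grad()
def project_to_ball(model: torch.nn.Module, radius: float) -> None:
    """Project every weight tensor back onto an L2 ball of the given radius,
    keeping the parameter space bounded throughout training."""
    for p in model.parameters():
        norm = p.norm()
        if norm > radius:
            p.mul_(radius / norm)

# Training loop sketch:
#   loss.backward(); optimizer.step(); project_to_ball(model, radius=10.0)
```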

How Sensitive are Meta-Learners to Dataset Imbalance?

1 code implementation ICLR Workshop Learning_to_Learn 2021 Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang

Meta-Learning (ML) has proven to be a useful tool for training Few-Shot Learning (FSL) algorithms by exposure to batches of tasks sampled from a meta-dataset.

Few-Shot Learning

Few-Shot Learning with Class Imbalance

1 code implementation 7 Jan 2021 Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang

Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation.

Few-Shot Learning

Class Imbalance in Few-Shot Learning

no code implementations 1 Jan 2021 Mateusz Ochal, Massimiliano Patacchiola, Jose Vazquez, Amos Storkey, Sen Wang

Few-shot learning aims to train models on a limited number of labeled samples from a support set in order to generalize to unseen samples from a query set.

Few-Shot Learning

Latent Adversarial Debiasing: Mitigating Collider Bias in Deep Neural Networks

no code implementations 19 Nov 2020 Luke Darlow, Stanisław Jastrzębski, Amos Storkey

By training neural networks on these adversarial examples, we can improve their generalisation in collider bias settings.

Selection bias

Non-greedy Gradient-based Hyperparameter Optimization Over Long Horizons

no code implementations 28 Sep 2020 Paul Micaelli, Amos Storkey

We demonstrate that the hyperparameters of this optimizer can be learned non-greedily without gradient degradation over ~10^4 inner gradient steps, by only requiring ~10 outer gradient steps.

Few-Shot Learning · Hyperparameter Optimization

Gradient-based Hyperparameter Optimization Over Long Horizons

1 code implementation NeurIPS 2021 Paul Micaelli, Amos Storkey

Gradient-based hyperparameter optimization has earned a widespread popularity in the context of few-shot meta-learning, but remains broadly impractical for tasks with long horizons (many gradient steps), due to memory scaling and gradient degradation issues.

Hyperparameter Optimization · Meta-Learning
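
The baseline such work improves on is differentiating a final loss through an unrolled inner optimization, whose memory cost grows with the horizon. A toy sketch of that hypergradient setup on a quadratic (all sizes illustrative):

```python
import torch

# Differentiate a final loss through an unrolled inner optimization to learn
# the inner learning rate. The memory cost grows with the horizon, which is
# exactly what makes long horizons hard.
torch.manual_seed(0)
A = torch.diag(torch.tensor([1.0, 10.0]))        # toy quadratic curvature
log_lr = torch.tensor(-3.0, requires_grad=True)  # learn the lr in log space
outer_opt = torch.optim.Adam([log_lr], lr=0.1)

for outer_step in range(10):                     # a few outer gradient steps
    w = torch.ones(2, requires_grad=True)        # reset inner parameters
    for _ in range(100):                         # unrolled inner horizon
        inner_loss = 0.5 * w @ A @ w
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - log_lr.exp() * g                 # differentiable SGD step
    outer_opt.zero_grad()
    (0.5 * w @ A @ w).backward()                 # final loss -> grad wrt lr
    outer_opt.step()
```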

Constraint-Based Regularization of Neural Networks

no code implementations 17 Jun 2020 Benedict Leimkuhler, Timothée Pouchon, Tiffany Vlaar, Amos Storkey

We propose a method for efficiently incorporating constraints into a stochastic gradient Langevin framework for the training of deep neural networks.

Image Classification

Optimizing Grouped Convolutions on Edge Devices

1 code implementation 17 Jun 2020 Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average, respectively.
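
For context, frameworks such as PyTorch expose grouped convolutions through a groups argument; the sketch below shows the roughly group-fold parameter saving that makes efficient kernels for them worthwhile:

```python
import torch
import torch.nn as nn

# Splitting 64 channels into 8 independent groups cuts weights ~8x while
# keeping the output geometry identical.
dense   = nn.Conv2d(64, 64, kernel_size=3, padding=1)
grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(dense), n_params(grouped))     # 36928 vs 4672

x = torch.randn(1, 64, 32, 32)
assert dense(x).shape == grouped(x).shape     # same output shape
```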

Self-Supervised Relational Reasoning for Representation Learning

1 code implementation NeurIPS 2020 Massimiliano Patacchiola, Amos Storkey

In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data.

Descriptive · Image Retrieval +4
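
A compressed sketch of the idea: embed two augmented views of each image and train a relation head to decide whether a pair of embeddings comes from the same underlying image. The negative-sampling scheme and sizes are simplifications, not the paper's exact setup:

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Score an embedding pair: same underlying image (1) or not (0)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=-1)).squeeze(-1)

def relational_loss(backbone, head, view_a, view_b):
    # Positives: two augmentations of the same image. Negatives: pair each
    # embedding with a shuffled partner (may rarely self-pair; ignored here).
    za, zb = backbone(view_a), backbone(view_b)
    neg = zb[torch.randperm(zb.shape[0])]
    logits = torch.cat([head(za, zb), head(za, neg)])
    labels = torch.cat([torch.ones(len(za)), torch.zeros(len(za))])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```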

Neural Architecture Search without Training

2 code implementations 8 Jun 2020 Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley

In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance.

Neural Architecture Search
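
The activation-overlap measure can be sketched from binary post-ReLU activation patterns; normalisation details are assumptions of this sketch:

```python
import numpy as np

def activation_overlap_score(codes: np.ndarray) -> float:
    """Score an untrained network from binary post-ReLU activation patterns
    of one minibatch (codes: [N, num_units] of {0, 1}). Datapoints with very
    similar patterns make the agreement kernel near-singular, so a higher
    log-determinant indicates better-separated activations."""
    c = codes.astype(np.float64)
    hamming = c @ (1.0 - c).T + (1.0 - c) @ c.T   # pairwise disagreements
    K = c.shape[1] - hamming                      # pairwise agreements
    sign, logdet = np.linalg.slogdet(K)
    return logdet if sign > 0 else -np.inf
```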

Defining Benchmarks for Continual Few-Shot Learning

2 code implementations 15 Apr 2020 Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, Amos Storkey

Both few-shot and continual learning have seen substantial progress in recent years due to the introduction of proper benchmarks.

continual few-shot learning · Continual Learning +1

Meta-Learning in Neural Networks: A Survey

1 code implementation 11 Apr 2020 Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey

We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning.

Few-Shot Learning · Hyperparameter Optimization +1

DHOG: Deep Hierarchical Object Grouping

no code implementations 13 Mar 2020 Luke Nicholas Darlow, Amos Storkey

We introduce deep hierarchical object grouping (DHOG) that computes a number of distinct discrete representations of images in a hierarchical order, eventually generating representations that better optimise the mutual information objective.

Ranked #18 on Image Clustering on CIFAR-10 (using extra training data)

Clustering · Edge Detection +3

What Information Does a ResNet Compress?

no code implementations ICLR 2019 Luke Nicholas Darlow, Amos Storkey

The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.

Comparing recurrent and convolutional neural networks for predicting wave propagation

1 code implementation ICLR Workshop DeepDiffEq 2019 Stathi Fotiadis, Eduardo Pignatelli, Mario Lino Valencia, Chris Cantwell, Amos Storkey, Anil A. Bharath

Dynamical systems can be modelled by partial differential equations, and numerical computations are used everywhere in science and engineering.

Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs

no code implementations 20 Feb 2020 Valentin Radu, Kuba Kaszyk, Yuan Wen, Jack Turner, Jose Cano, Elliot J. Crowley, Bjorn Franke, Amos Storkey, Michael O'Boyle

We evaluate higher-level libraries, which analyze the input characteristics of a convolutional layer and, based on those, produce optimized OpenCL (Arm Compute Library and TVM) and CUDA (cuDNN) code.

Model Compression · Network Pruning

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels

3 code implementations NeurIPS 2020 Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.

Bayesian Inference · Domain Adaptation +4

BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget

2 code implementations ICLR 2020 Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray

The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks.

Separable Layers Enable Structured Efficient Linear Substitutions

1 code implementation 3 Jun 2019 Gavin Gray, Elliot J. Crowley, Amos Storkey

In response to the development of recent efficient dense layers, this paper shows that something as simple as replacing linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff.

Learning to learn via Self-Critique

1 code implementation 24 May 2019 Antreas Antoniou, Amos Storkey

In this paper, we propose a framework called Self-Critique and Adapt or SCA, which learns to learn a label-free loss function, parameterized as a neural network.

Few-Shot Image Classification · Few-Shot Learning +1

Zero-shot Knowledge Transfer via Adversarial Belief Matching

7 code implementations NeurIPS 2019 Paul Micaelli, Amos Storkey

Finally, we also propose a metric to quantify the degree of belief matching between teacher and student in the vicinity of decision boundaries, and observe a significantly higher match between our zero-shot student and the teacher, than between a student distilled with real data and the teacher.

Transfer Learning

Dilated DenseNets for Relational Reasoning

no code implementations 1 Nov 2018 Antreas Antoniou, Agnieszka Słowik, Elliot J. Crowley, Amos Storkey

Despite their impressive performance in many tasks, deep neural networks often struggle at relational reasoning.

Relational Reasoning

Exploration by Random Network Distillation

19 code implementations ICLR 2019 Yuri Burda, Harrison Edwards, Amos Storkey, Oleg Klimov

In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods.

Montezuma's Revenge · reinforcement-learning +2
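
The mechanism behind random network distillation is compact: train a predictor to match a fixed, randomly initialised target network, and use its prediction error as an intrinsic novelty reward. A minimal sketch with illustrative sizes and training cadence:

```python
import torch
import torch.nn as nn

def make_net(obs_dim: int, out_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

obs_dim, out_dim = 8, 64
target = make_net(obs_dim, out_dim)          # fixed, randomly initialised
for p in target.parameters():
    p.requires_grad_(False)
predictor = make_net(obs_dim, out_dim)       # trained to imitate the target
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs: torch.Tensor) -> torch.Tensor:
    """Prediction error against the fixed random network: large on novel
    states, shrinking on states the predictor has fitted many times."""
    err = (predictor(obs) - target(obs)).pow(2).mean(dim=-1)
    opt.zero_grad()
    err.mean().backward()                    # fit the predictor online
    opt.step()
    return err.detach()
```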

Distilling with Performance Enhanced Students

no code implementations 24 Oct 2018 Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle

The task of accelerating large neural networks on general purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size.

Model Compression

Training Structured Efficient Convolutional Layers

no code implementations 20 Oct 2018 Gavin Gray, Elliot Crowley, Amos Storkey

Typical recent neural network designs are primarily convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting.

Computational Efficiency

Pruning neural networks: is it time to nip it in the bud?

no code implementations NIPS Workshop CDNNRIA 2018 Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

First, when time-constrained, it is better to train a simple, smaller network from scratch than to prune a large network.

A Closer Look at Structured Pruning for Neural Network Compression

2 code implementations 10 Oct 2018 Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.

Network Pruning · Neural Network Compression
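
A typical saliency criterion in this setting ranks output channels by the L1 norm of their filters and removes the weakest before fine-tuning; the sketch below shows that criterion (one common choice, not necessarily the exact one evaluated in the paper):

```python
import torch
import torch.nn as nn

def channels_to_prune(conv: nn.Conv2d, fraction: float) -> torch.Tensor:
    """Rank output channels by the L1 norm of their filters and return the
    indices of the weakest fraction, to be removed before fine-tuning."""
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one per channel
    k = int(fraction * conv.out_channels)
    return torch.argsort(scores)[:k]

conv = nn.Conv2d(64, 128, kernel_size=3)
print(channels_to_prune(conv, fraction=0.25).shape)  # 32 channels to drop
```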

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks

1 code implementation 19 Sep 2018 Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.

Neural Network Compression

Large-Scale Study of Curiosity-Driven Learning

4 code implementations ICLR 2019 Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros

However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent.

Atari Games · SNES Games

On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length

1 code implementation ICLR 2019 Stanisław Jastrzębski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey

When studying the SGD dynamics in relation to the sharpest directions in this initial phase, we find that the SGD step is large compared to the curvature and commonly fails to minimize the loss along the sharpest directions.

Relation

The Context-Aware Learner

no code implementations ICLR 2018 Conor Durkan, Amos Storkey, Harrison Edwards

Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario.

Meta-Learning

Three Factors Influencing Minima in SGD

no code implementations ICLR 2018 Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey

In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization.

Memorization · Open-Ended Question Answering
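
The learning-rate-to-batch-size claim can be stated through the scale of SGD gradient noise; the following is a standard formulation with assumed notation, not copied from the paper:

```latex
% Minibatch SGD with learning rate \eta and batch size B:
\[
  \theta_{t+1} = \theta_t - \eta\,\hat g_B(\theta_t),
  \qquad
  \operatorname{Cov}\!\bigl[\hat g_B(\theta_t)\bigr] \approx \frac{C(\theta_t)}{B},
\]
% so the stochastic part of each step scales with \eta/\sqrt{B}, and the
% stationary noise "temperature" of the dynamics grows with the ratio
\[
  T \propto \frac{\eta}{B}.
\]
% Scaling \eta and B together therefore preserves the noise level and,
% empirically, the width of the minima reached.
```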

Data Augmentation Generative Adversarial Networks

7 code implementations ICLR 2018 Antreas Antoniou, Amos Storkey, Harrison Edwards

The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items.

Data Augmentation · Few-Shot Learning +1

Moonshine: Distilling with Cheap Convolutions

1 code implementation NeurIPS 2018 Elliot J. Crowley, Gavin Gray, Amos Storkey

Many engineers wish to deploy modern neural networks in memory-limited settings; but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit.

Towards a Neural Statistician

5 code implementations 7 Jun 2016 Harrison Edwards, Amos Storkey

We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.

Clustering · Few-Shot Image Classification
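
The core ingredient is an exchangeable dataset encoder: embed items, pool order-invariantly, and map the pooled vector to a per-dataset context. The sketch covers only this encoder (the full model is a variational autoencoder over datasets); sizes are illustrative:

```python
import torch
import torch.nn as nn

class DatasetEncoder(nn.Module):
    """Summary statistic of a whole dataset: embed each item, pool with a
    mean (so the summary is invariant to item order), then map the pooled
    vector to a context describing the dataset."""

    def __init__(self, item_dim: int, context_dim: int):
        super().__init__()
        self.item_net = nn.Sequential(nn.Linear(item_dim, 128), nn.ReLU())
        self.context_net = nn.Linear(128, context_dim)

    def forward(self, dataset: torch.Tensor) -> torch.Tensor:
        # dataset: [num_items, item_dim] -> context: [context_dim]
        return self.context_net(self.item_net(dataset).mean(dim=0))
```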

Censoring Representations with an Adversary

1 code implementation 18 Nov 2015 Harrison Edwards, Amos Storkey

The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.

Fairness
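
A common way to implement this adversarial censoring is a gradient-reversal layer: the adversary learns to predict the sensitive attribute from the representation while the reversed gradient pushes the encoder to remove it. This is one standard realisation, not necessarily the paper's exact training scheme:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so the encoder is trained to hurt the adversary reading it."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def censored_losses(encoder, task_head, adversary, x, y, sensitive):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)
    adv_loss = nn.functional.cross_entropy(adversary(GradReverse.apply(z)),
                                           sensitive)
    # One backward pass trains the adversary to recover the sensitive
    # attribute while the reversed gradient censors it from the encoder.
    return task_loss + adv_loss
```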

Teaching Deep Convolutional Neural Networks to Play Go

1 code implementation 10 Dec 2014 Christopher Clark, Amos Storkey

Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins.

Game of Go

Multi-period Trading Prediction Markets with Connections to Machine Learning

no code implementations 4 Mar 2014 Jinli Hu, Amos Storkey

We present a new model for prediction markets, in which we use risk measures to model agents and introduce a market maker to describe the trading process.

BIG-bench Machine Learning

Bayesian Inference in Sparse Gaussian Graphical Models

no code implementations 27 Sep 2013 Peter Orchard, Felix Agakov, Amos Storkey

One of the fundamental tasks of science is to find explainable relationships between observed phenomena.

Bayesian Inference
