Search Results for author: Henry Gouk

Found 27 papers, 10 papers with code

Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months

no code implementations 27 Oct 2023 Fady Rezk, Antreas Antoniou, Henry Gouk, Timothy Hospedales

We analyze VeLO (versatile learned optimizer), the largest-scale attempt to date to train a general-purpose "foundational" optimizer.

Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn

1 code implementation CVPR 2023 Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales

Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction.

Few-Shot Learning · Pose Estimation +1

Effectiveness of Debiasing Techniques: An Indigenous Qualitative Analysis

no code implementations 17 Apr 2023 Vithya Yogarajan, Gillian Dobbie, Henry Gouk

An indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs) is presented in this paper.

Amortised Invariance Learning for Contrastive Self-Supervision

1 code implementation 24 Feb 2023 Ruchika Chavhan, Henry Gouk, Jan Stuehmer, Calum Heggan, Mehrdad Yaghoobi, Timothy Hospedales

Contrastive self-supervised learning methods famously produce high-quality, transferable representations by learning invariances to different data augmentations.

Contrastive Learning · Representation Learning +1
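The contrastive objective underlying this line of work can be sketched in a few lines. Below is a minimal numpy version of an InfoNCE/NT-Xent-style loss between two augmented views of a batch — an illustration of the general technique, not the paper's exact formulation:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss between two augmented views.

    z1, z2: (n, d) arrays of embeddings; row i of z1 and z2 are views of
    the same image (a positive pair); all other rows act as negatives.
    """
    # L2-normalise so the similarity is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    # Cross-entropy with the diagonal (matching views) as the target.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Views of the same images (slightly perturbed) give a low loss;
# unrelated embeddings give a high one.
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
random_pairs = info_nce_loss(z, rng.normal(size=z.shape))
```

Minimising this loss pulls matched views together and pushes the rest apart, which is precisely what bakes augmentation invariances into the representation.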

Quality Diversity for Visual Pre-Training

no code implementations ICCV 2023 Ruchika Chavhan, Henry Gouk, Da Li, Timothy Hospedales

Notably, the augmentations used in both supervised and self-supervised training lead to features with high invariance to spatial and appearance transformations.

Inductive Bias · Transfer Learning

Attacking Adversarial Defences by Smoothing the Loss Landscape

1 code implementation 1 Aug 2022 Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales

This paper investigates a family of methods for defending against adversarial attacks that owe part of their success to creating a noisy, discontinuous, or otherwise rugged loss landscape that adversaries find difficult to navigate.

Navigate
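A common way to attack such a rugged surface is to ascend the gradient of a smoothed surrogate, averaging gradient estimates over random input perturbations. The toy 1-D example below (the loss function and all names are invented for this sketch, not taken from the paper) shows why this helps:

```python
import numpy as np

def rugged_loss(x):
    # Toy "defended" loss: a smooth bowl plus high-frequency ripples
    # that make the raw gradient nearly useless to an attacker.
    return (x - 2.0) ** 2 + 0.3 * np.sin(40.0 * x)

def smoothed_grad(loss, x, sigma=0.1, n_samples=2000, eps=1e-3, seed=0):
    """Finite-difference gradient of the Gaussian-smoothed loss,
    estimated by averaging over random perturbations of the input."""
    noise = np.random.default_rng(seed).normal(scale=sigma, size=n_samples)
    xs = x + noise
    return np.mean((loss(xs + eps) - loss(xs - eps)) / (2 * eps))

x0 = 3.0
raw_grad = (rugged_loss(x0 + 1e-3) - rugged_loss(x0 - 1e-3)) / 2e-3
smooth_grad_est = smoothed_grad(rugged_loss, x0)
# The smooth component has gradient 2*(x - 2) = 2 at x = 3; the raw
# estimate is dominated by the ripples, the smoothed one is not.
```

Following the smoothed gradient lets the adversary step toward genuinely high-loss regions instead of bouncing between local ripples.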

HyperInvariances: Amortizing Invariance Learning

no code implementations 17 Jul 2022 Ruchika Chavhan, Henry Gouk, Jan Stühmer, Timothy Hospedales

Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified.

Inductive Bias

Meta Mirror Descent: Optimiser Learning for Fast Convergence

no code implementations 5 Mar 2022 Boyan Gao, Henry Gouk, Hae Beom Lee, Timothy M. Hospedales

The resulting framework, termed Meta Mirror Descent (MetaMD), learns to accelerate optimisation speed.

Meta-Learning

Finding lost DG: Explaining domain generalization via model complexity

no code implementations 1 Feb 2022 Da Li, Henry Gouk, Timothy Hospedales

However, much of the work in general-purpose DG is heuristically motivated, as the DG problem is hard to model formally; and recent evaluations have cast doubt on existing methods' practical efficacy -- in particular compared to a well-tuned empirical risk minimisation baseline.

Domain Generalization

Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks

1 code implementation 22 Nov 2021 Linus Ericsson, Henry Gouk, Timothy M. Hospedales

We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used.

Data Augmentation · Representation Learning +1

Self-Supervised Representation Learning: Introduction, Advances and Challenges

no code implementations 18 Oct 2021 Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales

Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets, thus alleviating the annotation bottleneck that is one of the main barriers to practical deployment of deep learning today.

Representation Learning

Active Altruism Learning and Information Sufficiency for Autonomous Driving

no code implementations 9 Oct 2021 Jack Geary, Henry Gouk, Subramanian Ramamoorthy

Safe interaction between vehicles requires the ability to choose actions that reveal the preferences of the other vehicles.

Active Learning · Autonomous Driving

Loss Function Learning for Domain Generalization by Implicit Gradient

no code implementations 29 Sep 2021 Boyan Gao, Henry Gouk, Yongxin Yang, Timothy Hospedales

We take a different approach, and explore the impact of the ERM loss function on out-of-domain generalisation.

Domain Generalization · Meta-Learning

Searching for Robustness: Loss Learning for Noisy Classification Tasks

no code implementations ICCV 2021 Boyan Gao, Henry Gouk, Timothy M. Hospedales

We present a "learning to learn" approach for automatically constructing white-box classification loss functions that are robust to label noise in the training data.

Classification · General Classification

Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition

2 code implementations ICCV 2021 Xueting Zhang, Debin Meng, Henry Gouk, Timothy Hospedales

Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple classifiers, e.g. nearest centroid.

cross-domain few-shot learning · Few-Shot Image Classification
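The nearest-centroid classifier mentioned in the abstract is simple enough to state in full. A minimal numpy sketch (illustrative; the paper works with learned feature embeddings, whereas here the inputs are toy vectors):

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """Few-shot classification by nearest class centroid.

    support_x: (n, d) labelled support embeddings
    support_y: (n,) integer class labels
    query_x:   (m, d) query embeddings
    """
    classes = np.unique(support_y)
    centroids = np.stack([support_x[support_y == c].mean(axis=0)
                          for c in classes])                 # (k, d)
    # Squared Euclidean distance from every query to every centroid.
    d2 = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(axis=1)]

rng = np.random.default_rng(0)
# Two well-separated classes, 5 support shots each.
support_x = np.concatenate([rng.normal(0, 0.1, (5, 4)),
                            rng.normal(3, 0.1, (5, 4))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.concatenate([rng.normal(0, 0.1, (3, 4)),
                          rng.normal(3, 0.1, (3, 4))])
preds = nearest_centroid_predict(support_x, support_y, query_x)
```

With only a handful of shots per class, averaging into a centroid is often more robust than fitting a parametric classifier, which is why it remains a strong few-shot baseline.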

How Well Do Self-Supervised Models Transfer?

1 code implementation CVPR 2021 Linus Ericsson, Henry Gouk, Timothy M. Hospedales

We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction.

Classifier calibration · Few-Shot Learning +7

Weight-Covariance Alignment for Adversarially Robust Neural Networks

1 code implementation 17 Oct 2020 Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales

Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks.

Adversarial Robustness

Don't Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights

no code implementations 22 Jun 2020 Linus Ericsson, Henry Gouk, Timothy M. Hospedales

We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy by prioritising the most useful instances.

Meta-Learning · Self-Supervised Learning

Distance-Based Regularisation of Deep Networks for Fine-Tuning

1 code implementation ICLR 2021 Henry Gouk, Timothy M. Hospedales, Massimiliano Pontil

Our bound is highly relevant for fine-tuning, because providing a network with a good initialisation based on transfer learning means that learning can modify the weights less, and hence achieve tighter generalisation.

Transfer Learning
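One concrete way to act on this insight is to constrain the fine-tuned weights to stay within a fixed distance of the pre-trained initialisation, e.g. via a projection step inside projected gradient descent. A minimal sketch of such a projection (an illustration of the idea; the paper derives the specific constraints from generalisation bounds):

```python
import numpy as np

def project_to_init_ball(w, w0, radius):
    """Distance-based regularisation as a projection: keep the
    fine-tuned weights w within a Frobenius ball of the given radius
    around the pre-trained initialisation w0."""
    delta = w - w0
    dist = np.linalg.norm(delta)
    if dist <= radius:
        return w                        # constraint already satisfied
    return w0 + delta * (radius / dist)  # pull back onto the ball

rng = np.random.default_rng(0)
w0 = rng.normal(size=(8, 4))        # pre-trained weights
w = w0 + rng.normal(size=(8, 4))    # weights after some fine-tuning steps
w_proj = project_to_init_ball(w, w0, radius=1.0)
```

The smaller the radius, the less fine-tuning can deviate from the good initialisation, which is exactly the mechanism the quoted bound rewards.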

Deep clustering with concrete k-means

no code implementations 17 Oct 2019 Boyan Gao, Yongxin Yang, Henry Gouk, Timothy M. Hospedales

We address the problem of simultaneously learning a k-means clustering and deep feature representation from unlabelled data, which is of interest due to the potential of deep k-means to outperform traditional two-step feature extraction and shallow-clustering strategies.

Clustering · Deep Clustering
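For context, the shallow two-step baseline the abstract refers to alternates hard assignments and centroid updates on fixed features. A minimal Lloyd's k-means sketch (the paper's contribution is to relax these hard assignments with a concrete/Gumbel-softmax distribution so clustering and feature learning can be trained jointly, which this baseline does not do):

```python
import numpy as np

def lloyd_kmeans(x, k, n_iters=20, seed=0):
    """Plain hard-assignment k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    assign = np.zeros(len(x), dtype=int)
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid.
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        # Update step: each centroid moves to the mean of its points.
        for j in range(k):
            if (assign == j).any():  # keep old centroid if cluster empties
                centroids[j] = x[assign == j].mean(axis=0)
    return centroids, assign

rng = np.random.default_rng(1)
blob_a = rng.normal(0.0, 0.2, size=(20, 2))
blob_b = rng.normal(10.0, 0.2, size=(20, 2))
x = np.concatenate([blob_a, blob_b])
centroids, assign = lloyd_kmeans(x, k=2)
```

The hard argmin in the assignment step is what blocks gradients from flowing into a feature extractor; replacing it with a differentiable relaxation is the crux of the deep variant.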

Stochastic Gradient Trees

1 code implementation 23 Jan 2019 Henry Gouk, Bernhard Pfahringer, Eibe Frank

We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision.

Classification · General Classification +3
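The core idea of supervising a tree with gradient information can be shown with a single split. The sketch below chooses a split and leaf values from per-example gradient and Hessian statistics, using the second-order gain familiar from gradient-boosted trees — a one-node illustration of the principle, not the paper's incremental algorithm with its statistical tests:

```python
import numpy as np

def best_stump(x, g, h):
    """Pick the split on one feature that maximises the second-order
    gain  G_L^2/H_L + G_R^2/H_R - G^2/H;  each leaf then predicts the
    Newton step -G_leaf / H_leaf."""
    order = np.argsort(x)
    xs, gs, hs = x[order], g[order], h[order]
    G, H = gs.sum(), hs.sum()
    gl = np.cumsum(gs)[:-1]   # left-side gradient sums for each split
    hl = np.cumsum(hs)[:-1]   # left-side Hessian sums for each split
    gain = gl**2 / hl + (G - gl)**2 / (H - hl) - G**2 / H
    i = gain.argmax()
    threshold = (xs[i] + xs[i + 1]) / 2
    left_val = -gl[i] / hl[i]
    right_val = -(G - gl[i]) / (H - hl[i])
    return threshold, left_val, right_val

# Squared loss on a step function: with predictions starting at 0,
# the gradients are pred - y and the Hessians are all ones.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
g = 0.0 - y
h = np.ones_like(x)
threshold, left_val, right_val = best_stump(x, g, h)
```

Because the tree only ever sees gradients and Hessians, the same machinery works for any twice-differentiable loss, which is what makes gradient-supervised trees so flexible.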

MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes

no code implementations 16 Apr 2018 Henry Gouk, Bernhard Pfahringer, Eibe Frank, Michael Cree

Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved.

Regularisation of Neural Networks by Enforcing Lipschitz Continuity

1 code implementation 12 Apr 2018 Henry Gouk, Eibe Frank, Bernhard Pfahringer, Michael J. Cree

We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs.
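For a fully connected layer, the Lipschitz constant with respect to the 2-norm is the spectral norm of its weight matrix, so the constraint can be enforced by a simple projection after each update. A minimal sketch of that projection, W ← W / max(1, ‖W‖₂ / λ):

```python
import numpy as np

def project_spectral_norm(W, max_norm):
    """Constrain a layer's Lipschitz constant (w.r.t. the 2-norm) by
    rescaling its weight matrix whenever the spectral norm exceeds the
    chosen bound."""
    sigma = np.linalg.svd(W, compute_uv=False)[0]  # largest singular value
    return W / max(1.0, sigma / max_norm)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8)) * 2.0   # almost surely violates the bound
W_c = project_spectral_norm(W, max_norm=1.0)
```

In practice the exact SVD is usually replaced by a few power-iteration steps for speed; either way, the product of the per-layer bounds then upper-bounds the Lipschitz constant of the whole network.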

Fast Metric Learning For Deep Neural Networks

no code implementations19 Nov 2015 Henry Gouk, Bernhard Pfahringer, Michael Cree

Similarity metrics are a core component of many information retrieval and machine learning systems.

General Classification · Information Retrieval +3
