no code implementations • 27 Oct 2023 • Fady Rezk, Antreas Antoniou, Henry Gouk, Timothy Hospedales
We analyze VeLO (versatile learned optimizer), the largest scale attempt to train a general purpose "foundational" optimizer to date.
no code implementations • 6 Jul 2023 • Luísa Shimabucoro, Timothy Hospedales, Henry Gouk
Numerous benchmarks for Few-Shot Learning have been proposed in the last decade.
1 code implementation • CVPR 2023 • Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales
Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction.
no code implementations • 17 Apr 2023 • Vithya Yogarajan, Gillian Dobbie, Henry Gouk
An indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs) is presented in this paper.
1 code implementation • 24 Feb 2023 • Ruchika Chavhan, Henry Gouk, Jan Stühmer, Calum Heggan, Mehrdad Yaghoobi, Timothy Hospedales
Contrastive self-supervised learning methods famously produce high quality transferable representations by learning invariances to different data augmentations.
no code implementations • ICCV 2023 • Ruchika Chavhan, Henry Gouk, Da Li, Timothy Hospedales
Notably, the augmentations used in both supervised and self-supervised training lead to features with high invariance to spatial and appearance transformations.
1 code implementation • 1 Aug 2022 • Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
This paper investigates a family of methods for defending against adversarial attacks that owe part of their success to creating a noisy, discontinuous, or otherwise rugged loss landscape that adversaries find difficult to navigate.
no code implementations • 17 Jul 2022 • Ruchika Chavhan, Henry Gouk, Jan Stühmer, Timothy Hospedales
Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified.
no code implementations • 15 Jun 2022 • Adrian El Baz, Ihsan Ullah, Edesio Alcobaça, André C. P. L. F. Carvalho, Hong Chen, Fabio Ferreira, Henry Gouk, Chaoyu Guan, Isabelle Guyon, Timothy Hospedales, Shell Hu, Mike Huisman, Frank Hutter, Zhengying Liu, Felix Mohr, Ekrem Öztürk, Jan N. van Rijn, Haozhe Sun, Xin Wang, Wenwu Zhu
Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resources are available.
no code implementations • 5 Mar 2022 • Boyan Gao, Henry Gouk, Hae Beom Lee, Timothy M. Hospedales
The resulting framework, termed Meta Mirror Descent (MetaMD), learns to accelerate optimisation speed.
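MetaMD meta-learns the Bregman divergence that drives the mirror descent update. As context, here is a minimal sketch of classical (hand-designed) mirror descent with the negative-entropy potential on the probability simplex, i.e. the exponentiated-gradient update; this is an illustration of the base algorithm, not the paper's learned variant:

```python
import numpy as np

def mirror_descent_step(x, grad, lr):
    """One exponentiated-gradient update: mirror descent with the
    negative-entropy potential, constrained to the probability simplex."""
    y = x * np.exp(-lr * grad)  # gradient step in the dual (mirror) space
    return y / y.sum()          # map back onto the simplex

# Minimise the linear objective f(x) = <c, x> over the simplex.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = mirror_descent_step(x, c, lr=0.5)
# The iterate concentrates on the coordinate with the smallest cost.
```

Swapping the entropy potential for a learned divergence, as MetaMD does, changes the geometry of the dual-space step while keeping this same update structure.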
no code implementations • 1 Feb 2022 • Da Li, Henry Gouk, Timothy Hospedales
However, much of the work in general-purpose DG is heuristically motivated, as the DG problem is hard to model formally; and recent evaluations have cast doubt on existing methods' practical efficacy -- in particular compared to a well-tuned empirical risk minimisation baseline.
1 code implementation • 22 Nov 2021 • Linus Ericsson, Henry Gouk, Timothy M. Hospedales
We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used.
no code implementations • 18 Oct 2021 • Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets, thus alleviating the annotation bottleneck that is one of the main barriers to practical deployment of deep learning today.
no code implementations • 9 Oct 2021 • Jack Geary, Henry Gouk, Subramanian Ramamoorthy
Safe interaction between vehicles requires the ability to choose actions that reveal the preferences of the other vehicles.
no code implementations • 29 Sep 2021 • Boyan Gao, Henry Gouk, Yongxin Yang, Timothy Hospedales
We take a different approach, and explore the impact of the ERM loss function on out-of-domain generalisation.
no code implementations • ICCV 2021 • Boyan Gao, Henry Gouk, Timothy M. Hospedales
We present a "learning to learn" approach for automatically constructing white-box classification loss functions that are robust to label noise in the training data.
2 code implementations • ICCV 2021 • Xueting Zhang, Debin Meng, Henry Gouk, Timothy Hospedales
Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple classifiers, e.g. nearest centroid.
Ranked #6 on Few-Shot Image Classification on Meta-Dataset
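The nearest-centroid classifier mentioned above is simple enough to sketch directly; the following is a generic illustration (not the paper's implementation), classifying query embeddings by distance to per-class mean embeddings of a few-shot support set:

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """Few-shot classification by distance to per-class centroids.
    support_x: (n, d) embeddings; support_y: (n,) integer labels;
    query_x: (m, d) embeddings to classify."""
    classes = np.unique(support_y)
    centroids = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from each query to each centroid.
    d = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]
```

In the few-shot setting the interesting work happens in the feature extractor that produces these embeddings; the classifier itself stays this simple.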
1 code implementation • CVPR 2021 • Linus Ericsson, Henry Gouk, Timothy M. Hospedales
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction.
1 code implementation • 17 Oct 2020 • Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks.
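One common SNN design injects Gaussian noise into a layer's pre-activations at training time. A minimal sketch of such a layer (an illustration of the general idea, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_dense(x, W, b, sigma, train=True):
    """Dense layer that adds isotropic Gaussian noise to its
    pre-activations during training, then applies a ReLU."""
    z = x @ W + b
    if train:
        z = z + sigma * rng.standard_normal(z.shape)
    return np.maximum(z, 0.0)
```

At evaluation time the noise is switched off, so the layer behaves like an ordinary dense layer; during training the injected randomness makes gradient-based attacks harder to compute reliably.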
no code implementations • 22 Jun 2020 • Linus Ericsson, Henry Gouk, Timothy M. Hospedales
We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy by prioritising the most useful instances.
1 code implementation • ICLR 2021 • Henry Gouk, Timothy M. Hospedales, Massimiliano Pontil
Our bound is highly relevant for fine-tuning: a good initialisation obtained via transfer learning means training needs to modify the weights less, and hence achieves tighter generalisation.
no code implementations • ICLR 2020 • Henry Gouk, Timothy M. Hospedales
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model.
no code implementations • 17 Oct 2019 • Boyan Gao, Yongxin Yang, Henry Gouk, Timothy M. Hospedales
We address the problem of simultaneously learning a k-means clustering and deep feature representation from unlabelled data, which is of interest due to the potential of deep k-means to outperform traditional two-step feature extraction and shallow-clustering strategies.
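For reference, the shallow k-means component that the deep variant builds on is Lloyd's algorithm, sketched below; this is the classical baseline, not the paper's joint deep-clustering method:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate between assigning points to the
    nearest centroid and recomputing each centroid as a cluster mean."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

The deep version replaces the raw inputs `X` with learned embeddings and trains the feature extractor and clustering jointly, rather than extracting features first and clustering afterwards.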
1 code implementation • 23 Jan 2019 • Henry Gouk, Bernhard Pfahringer, Eibe Frank
We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision.
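To make "gradient information as the source of supervision" concrete, here is a hedged sketch in the spirit of gradient-boosting-style tree fitting: a depth-1 regression tree (stump) fit to pseudo-targets such as negative loss gradients. This illustrates the general idea, not the paper's specific algorithm:

```python
import numpy as np

def fit_stump(x, targets):
    """Fit a depth-1 regression tree to 1-D inputs and (pseudo-)targets,
    e.g. negative gradients of a loss, by exhaustive threshold search."""
    best = None
    for t in np.unique(x):
        left, right = targets[x <= t], targets[x > t]
        if len(left) == 0 or len(right) == 0:
            continue  # degenerate split
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, left_val, right_val = best
    return lambda q: np.where(q <= t, left_val, right_val)
```

Each leaf predicts the mean pseudo-target of the points it receives, which is the least-squares-optimal constant for that region.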
no code implementations • 16 Apr 2018 • Henry Gouk, Bernhard Pfahringer, Eibe Frank, Michael Cree
Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved.
1 code implementation • 12 Apr 2018 • Henry Gouk, Eibe Frank, Bernhard Pfahringer, Michael J. Cree
We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs.
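One standard way to enforce a Lipschitz bound on a linear layer is to project its weight matrix back onto a spectral-norm ball after each update, since a dense layer's Lipschitz constant (w.r.t. the Euclidean norm) equals its largest singular value. A minimal sketch of that projection (an illustration of the technique, not the paper's exact procedure):

```python
import numpy as np

def project_spectral_norm(W, max_norm):
    """Rescale W so its largest singular value is at most max_norm,
    bounding the Lipschitz constant of the corresponding linear map."""
    s = np.linalg.svd(W, compute_uv=False)[0]  # largest singular value
    if s > max_norm:
        W = W * (max_norm / s)
    return W
```

Composing layers multiplies their Lipschitz constants, so constraining each layer's spectral norm yields a bound on the whole network.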
no code implementations • 19 Nov 2015 • Henry Gouk, Bernhard Pfahringer, Michael Cree
Similarity metrics are a core component of many information retrieval and machine learning systems.