no code implementations • 21 Mar 2024 • Dylan Auty, Roy Miles, Benedikt Kolbeinsson, Krystian Mikolajczyk
In this setting, cross-task distillation can be used, enabling the use of any teacher model trained on a different task.
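A minimal sketch of how such cross-task distillation can be wired up, assuming a frozen teacher from another task and a small learned projector to bridge the two feature spaces; the module name, dimensions, and MSE objective below are illustrative rather than the paper's exact formulation:

```python
import torch.nn as nn
import torch.nn.functional as F

class CrossTaskDistillLoss(nn.Module):
    """Align student features with a teacher trained on a different task
    via a small learned projector (illustrative dimensions)."""
    def __init__(self, student_dim=256, teacher_dim=512):
        super().__init__()
        self.projector = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feats, teacher_feats):
        projected = self.projector(student_feats)             # map into the teacher's feature space
        return F.mse_loss(projected, teacher_feats.detach())  # teacher stays frozen
```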
1 code implementation • 10 Mar 2024 • Roy Miles, Ismail Elezi, Jiankang Deng
Knowledge distillation is an effective method for training small and efficient deep learning models.
Ranked #2 on Knowledge Distillation on ImageNet
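For context, a minimal sketch of the standard logit-distillation objective (temperature-softened KL divergence plus cross-entropy, as in Hinton et al.); the temperature and weighting are illustrative, and this is the generic recipe rather than the specific method ranked above:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # usual supervised term
    return alpha * soft + (1.0 - alpha) * hard
```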
4 code implementations • 20 Mar 2023 • Roy Miles, Krystian Mikolajczyk
We then show that the normalisation of representations is tightly coupled with the training dynamics of the distillation projector, which can have a large impact on the student's performance.
Ranked #4 on Knowledge Distillation on ImageNet
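A minimal sketch of projector-based feature distillation with explicit normalisation, assuming a simple linear projector and L2 normalisation before matching; the choice and placement of the normalisation are exactly the kind of design decision the paper analyses, and the specifics here are illustrative:

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectedFeatureDistill(nn.Module):
    """Student features pass through a trainable projector, then both sides
    are normalised before being matched (illustrative choices throughout)."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.projector = nn.Linear(student_dim, teacher_dim, bias=False)

    def forward(self, student_feats, teacher_feats):
        zs = F.normalize(self.projector(student_feats), dim=1)  # normalisation interacts
        zt = F.normalize(teacher_feats.detach(), dim=1)         # with projector dynamics
        return F.mse_loss(zs, zt)
```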
no code implementations • CVPR 2023 • Roy Miles, Mehmet Kerim Yucel, Bruno Manganelli, Albert Saa-Garriga
This paper tackles the problem of semi-supervised video object segmentation on resource-constrained devices, such as mobile phones.
Ranked #6 on Video Object Segmentation on YouTube-VOS 2019
1 code implementation • 1 Dec 2021 • Roy Miles, Adrian Lopez Rodriguez, Krystian Mikolajczyk
Despite the empirical success of knowledge distillation, current state-of-the-art methods are computationally expensive to train, which makes them difficult to adopt in practice.
Classification with Binary Weight Network • Knowledge Distillation +1
no code implementations • 25 Oct 2021 • Roy Miles, Krystian Mikolajczyk
We present an efficient alternative to the convolutional layer using cheap spatial transformations.
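One well-known way to realise this idea, shown below purely for illustration, is to replace the spatial mixing of a KxK convolution with fixed channel-wise shifts followed by a 1x1 convolution; this generic shift-convolution block is an assumption for the sketch and not necessarily the transformation used in the paper:

```python
import torch
import torch.nn as nn

class ShiftPointwise(nn.Module):
    """Fixed per-group spatial shifts (zero multiply cost) followed by a
    pointwise convolution for channel mixing; replaces a KxK convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        g = x.shape[1] // 4
        out = torch.zeros_like(x)
        out[:, 0*g:1*g, :, :-1] = x[:, 0*g:1*g, :, 1:]    # shift left
        out[:, 1*g:2*g, :, 1:]  = x[:, 1*g:2*g, :, :-1]   # shift right
        out[:, 2*g:3*g, :-1, :] = x[:, 2*g:3*g, 1:, :]    # shift up
        out[:, 3*g:,    1:, :]  = x[:, 3*g:,    :-1, :]   # shift down
        return self.pointwise(out)
```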
no code implementations • 16 Aug 2020 • Roy Miles, Krystian Mikolajczyk
In this paper, we propose an approach for filter-level pruning with hierarchical knowledge distillation based on the teacher, teaching-assistant, and student framework.
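A rough sketch of the hierarchical chain, where each smaller (pruned) model is distilled from the next larger one in turn; `prune_filters` and `distil` are hypothetical placeholders standing in for the method's actual filter-pruning criterion and distillation objective:

```python
import copy

def hierarchical_prune_and_distil(teacher, prune_filters, distil, loader, ratios=(0.5, 0.25)):
    """Teacher -> teaching assistant -> student: prune, distil, repeat."""
    current = teacher
    for ratio in ratios:
        candidate = prune_filters(copy.deepcopy(current), ratio)   # hypothetical pruning step
        distil(teacher=current, student=candidate, loader=loader)  # hypothetical distillation step
        current = candidate                                        # the assistant becomes the next teacher
    return current
```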
no code implementations • 9 Jan 2020 • Roy Miles, Krystian Mikolajczyk
Deep neural networks have demonstrated state-of-the-art performance for feature-based image matching, driven by the advent of large and diverse datasets.