no code implementations • 10 Mar 2024 • Zijun Long, Lipeng Zhuang, George Killick, Richard McCreadie, Gerardo Aragon Camarasa, Paul Henderson
In this paper, we show that human labelling errors not only differ significantly from synthetic label errors, but also pose unique challenges in SCL, distinct from those in traditional supervised learning methods.
no code implementations • 22 Feb 2024 • Zijun Long, George Killick, Lipeng Zhuang, Gerardo Aragon-Camarasa, Zaiqiao Meng, Richard McCreadie
State-of-the-art pre-trained image models predominantly adopt a two-stage approach: initial unsupervised pre-training on large-scale datasets followed by task-specific fine-tuning using cross-entropy (CE) loss.
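The second stage of this standard pipeline can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the "pre-trained encoder" is a hypothetical stand-in (a fixed random projection), and only a linear classification head is trained with CE loss on top of frozen features.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_encoder(x, W_frozen):
    """Stage-1 stand-in: a frozen feature extractor (hypothetical)."""
    return np.tanh(x @ W_frozen)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y):
    """Mean CE loss over a batch of integer labels."""
    return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

# Toy data: two classes determined by the sign of the first feature.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)

W_frozen = rng.normal(size=(8, 16))  # stage-1 weights, never updated
W_head = np.zeros((16, 2))           # stage-2 classification head

feats = pretrained_encoder(X, W_frozen)
losses = []
for _ in range(200):                 # fine-tune the head with CE + SGD
    p = softmax(feats @ W_head)
    losses.append(cross_entropy(p, y))
    grad = feats.T @ (p - np.eye(2)[y]) / len(y)  # softmax-CE gradient
    W_head -= 0.5 * grad

print(losses[0], losses[-1])  # loss starts at ln(2) and decreases
```

Replacing the CE objective in this second stage (e.g. with a supervised contrastive loss) is exactly the kind of design choice the abstracts above investigate.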
1 code implementation • 3 Dec 2023 • George Killick, Paul Henderson, Paul Siebert, Gerardo Aragon-Camarasa
In this paper, we tackle the challenge of actively attending to visual scenes using a foveated sensor.
no code implementations • 25 Nov 2023 • Zijun Long, George Killick, Lipeng Zhuang, Richard McCreadie, Gerardo Aragon Camarasa, Paul Henderson
However, while the detrimental effects of noisy labels in supervised learning are well-researched, their influence on SCL remains largely unexplored.
1 code implementation • 16 Oct 2023 • Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa
Robotic vision applications often necessitate a wide range of visual perception tasks, such as object detection, segmentation, and identification.
1 code implementation • 4 Sep 2023 • Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa
As Multimodal Large Language Models (MLLMs) grow in size, adapting them to specialized tasks becomes increasingly challenging due to high computational and memory demands.
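One widely used response to these computational and memory demands (named here for illustration, not necessarily the method of this paper) is parameter-efficient fine-tuning such as low-rank adaptation (LoRA): the large pre-trained weight matrix is frozen and only a rank-r update is trained. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_out, r = 64, 64, 4
W = rng.normal(size=(d_in, d_out))      # frozen pre-trained weight
A = rng.normal(size=(r, d_out)) * 0.01  # trainable, small init
B = np.zeros((d_in, r))                 # trainable, zero init -> adapter starts as a no-op

def adapted_forward(x):
    # Original frozen path plus the trainable low-rank correction.
    return x @ W + x @ B @ A

full_params = W.size           # 64 * 64 = 4096
lora_params = A.size + B.size  # 4*64 + 64*4 = 512
print(lora_params / full_params)  # -> 0.125: only 12.5% of weights are trained
```

Because only A and B receive gradients, optimizer state and gradient memory shrink in proportion to the trainable fraction, which is what makes adapting very large models tractable.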
no code implementations • 28 Aug 2023 • Zijun Long, George Killick, Richard McCreadie, Gerardo Aragon Camarasa, Zaiqiao Meng
State-of-the-art image models predominantly follow a two-stage strategy: pre-training on large datasets and fine-tuning with cross-entropy loss.