no code implementations • 9 Apr 2024 • Sidra Aleem, Fangyijie Wang, Mayug Maniparambil, Eric Arazo, Julia Dietlmeier, Kathleen Curran, Noel E. O'Connor, Suzanne Little
Finally, SAM is prompted by the retrieved ROI to segment a specific organ.
1 code implementation • 7 Feb 2024 • Sidra Aleem, Julia Dietlmeier, Eric Arazo, Suzanne Little
To further boost adaptation, we utilize Adaptive Batch Normalization (AdaBN), which computes target-specific running statistics, and use it alongside ConvLoRA.
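The core idea of AdaBN mentioned above, keeping the learned affine parameters but replacing source-domain BatchNorm running statistics with statistics recomputed on target-domain data, can be sketched in a few lines of NumPy. This is a minimal illustration for a 2-D feature matrix; the function names are illustrative, not the paper's code:

```python
import numpy as np

def adabn_recalibrate(target_batches):
    """AdaBN sketch: estimate BatchNorm running statistics from
    target-domain activations only, discarding the source-domain ones."""
    feats = np.concatenate(target_batches, axis=0)  # (N, C)
    return feats.mean(axis=0), feats.var(axis=0)

def batchnorm_infer(x, mean, var, gamma, beta, eps=1e-5):
    """Inference-time BN using the recalibrated statistics; gamma/beta
    are the affine parameters learned on the source domain."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

With statistics recalibrated this way, target-domain activations come out approximately standardized before the affine transform, which is the adaptation effect AdaBN relies on.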
no code implementations • 22 Jul 2023 • Enric Moreu, Eric Arazo, Kevin McGuinness, Noel E. O'Connor
To address both challenges, we leverage synthetic data and propose an end-to-end model for polyp segmentation that integrates real and synthetic data to artificially increase the size of the datasets and aid the training when unlabeled samples are available.
no code implementations • 20 Jul 2023 • Enric Moreu, Eric Arazo, Kevin McGuinness, Noel E. O'Connor
We take advantage of recent one-sided translation models because they use significantly less memory, allowing us to add a segmentation model in the training loop.
1 code implementation • 22 Jan 2023 • Tarun Krishna, Ayush K Rai, Alexandru Drimbarean, Eric Arazo, Paul Albert, Alan F Smeaton, Kevin McGuinness, Noel E O'Connor
Computationally expensive training strategies make self-supervised learning (SSL) impractical for resource-constrained industrial settings.
2 code implementations • 10 Oct 2022 • Paul Albert, Eric Arazo, Tarun Krishna, Noel E. O'Connor, Kevin McGuinness
Experiments demonstrate the state-of-the-art performance of our Pseudo-Loss Selection (PLS) algorithm on a variety of benchmark datasets, including curated data synthetically corrupted with in-distribution and out-of-distribution noise, and two real-world web noise datasets.
no code implementations • 20 Sep 2022 • Carles Garcia-Cabrera, Eric Arazo, Kathleen M. Curran, Noel E. O'Connor, Kevin McGuinness
Methods that remain resilient to artifacts in cardiac magnetic resonance imaging (MRI) while performing ventricle segmentation are crucial for ensuring quality in the structural and functional analysis of those tissues.
1 code implementation • 4 Jul 2022 • Paul Albert, Eric Arazo, Noel E. O'Connor, Kevin McGuinness
Previous works have shown these noisy samples to be a mixture of in-distribution (ID) samples, which are assigned to the incorrect category but present visual semantics similar to other classes in the dataset, and out-of-distribution (OOD) images, which share no semantic correlation with any category in the dataset.
no code implementations • 9 Jun 2022 • Eric Arazo, Robin Aly, Kevin McGuinness
Cow lameness is a severe condition that affects the life cycle and life quality of dairy cows and results in considerable economic losses.
1 code implementation • 27 Oct 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
We suggest that, given a specific budget, the best course of action is to disregard importance sampling and introduce adequate data augmentation; e.g., when reducing the budget to 30% in CIFAR-10/100, RICAP data augmentation maintains accuracy, while importance sampling does not.
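The RICAP augmentation referenced above patches random crops of four images into a single image and mixes the labels in proportion to patch area. A minimal NumPy sketch of that idea follows; the array layout (batch, height, width, channels) and helper names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ricap(images, labels, num_classes, beta=0.3, rng=None):
    """RICAP sketch: tile crops of four shuffled copies of the batch into
    the four quadrants defined by a random boundary point, and build soft
    labels weighted by each quadrant's area."""
    rng = rng if rng is not None else np.random.default_rng()
    B, H, W, C = images.shape
    w = int(np.round(W * rng.beta(beta, beta)))   # boundary point
    h = int(np.round(H * rng.beta(beta, beta)))
    widths = [w, W - w, w, W - w]                  # TL, TR, BL, BR
    heights = [h, h, H - h, H - h]
    out = np.empty_like(images)
    soft = np.zeros((B, num_classes))
    for k, (wk, hk) in enumerate(zip(widths, heights)):
        idx = rng.permutation(B)                   # images for this quadrant
        x0 = rng.integers(0, W - wk + 1)           # random crop origin
        y0 = rng.integers(0, H - hk + 1)
        crop = images[idx, y0:y0 + hk, x0:x0 + wk]
        ox = 0 if k % 2 == 0 else w                # quadrant placement
        oy = 0 if k < 2 else h
        out[:, oy:oy + hk, ox:ox + wk] = crop
        # label weight proportional to quadrant area
        soft[np.arange(B), labels[idx]] += (wk * hk) / (W * H)
    return out, soft
```

The four quadrant areas tile the image exactly, so each soft label sums to one.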
no code implementations • 26 Oct 2021 • Paul Albert, Diego Ortego, Eric Arazo, Noel O'Connor, Kevin McGuinness
We propose a simple solution to bridge the gap with a fully clean dataset using Dynamic Softening of Out-of-distribution Samples (DSOS), which we design on corrupted versions of the CIFAR-100 dataset, and compare against state-of-the-art algorithms on the web-noise-perturbed MiniImageNet and Stanford datasets, and on real label noise datasets: WebVision 1.0 and Clothing1M.
no code implementations • 1 Jan 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel O'Connor, Kevin McGuinness
For example, when training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy combined with certain data augmentation surpasses the performance of 100%-budget models trained with standard data augmentation.
1 code implementation • CVPR 2021 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness
We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.
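The disagreement-based detection described above can be sketched with a simple nearest-neighbour soft-label estimate: average the labels of each sample's neighbours in feature space and flag samples whose given label disagrees with the soft-label argmax. In the paper the features come from contrastive learning; in this sketch they are simply given, and the function name is illustrative:

```python
import numpy as np

def detect_noisy_labels(features, labels, num_classes, k=5):
    """Sketch of soft-label disagreement noise detection: a sample is
    flagged as noisy when the argmax of its neighbour-averaged soft
    label differs from its assigned label."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)     # exclude each sample itself
    neigh = np.argsort(-sim, axis=1)[:, :k]
    one_hot = np.eye(num_classes)[labels]
    soft = one_hot[neigh].mean(axis=1)          # per-sample soft labels
    noisy = soft.argmax(axis=1) != labels       # disagreement flag
    return soft, noisy
```

On well-separated clusters this flags exactly the samples whose labels were flipped, which is the intuition behind using robust representations for per-sample soft labels.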
Ranked #21 on Image Classification on mini WebVision 1.0
1 code implementation • 23 Jul 2020 • Paul Albert, Diego Ortego, Eric Arazo, Noel E. O'Connor, Kevin McGuinness
We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.
1 code implementation • 18 Dec 2019 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness
However, we show that different noise distributions make the application of this trick less straightforward and propose to continuously relabel all images to reveal a discriminative loss against multiple distributions.
4 code implementations • 8 Aug 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.
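The consistency regularization described above penalizes a model for predicting different class distributions on two perturbations of the same unlabeled sample. A minimal NumPy sketch of such a consistency term (a generic mean-squared-error variant, not this paper's specific loss):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(logits_weak, logits_strong):
    """Mean squared error between the class distributions predicted for
    two perturbations of the same unlabeled batch."""
    p, q = softmax(logits_weak), softmax(logits_strong)
    return np.mean((p - q) ** 2)
```

The loss is zero when the two perturbed views yield identical predictions, so minimizing it encourages the invariance the entry describes.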
2 code implementations • 25 Apr 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness
Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss).
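The bootstrapping correction mentioned above mixes the given label with the network's own prediction, weighted by each sample's probability of being clean. In the paper that weight comes from a two-component beta mixture fitted to the per-sample loss distribution; in this sketch it is simply passed in, and the hard-bootstrapping variant is shown:

```python
import numpy as np

def bootstrapping_loss(probs, targets_onehot, w_clean):
    """Hard bootstrapping cross-entropy sketch.

    loss_i = -(w_i * y_i + (1 - w_i) * z_i) . log p_i,  z_i = onehot(argmax p_i),
    where w_i is the per-sample probability of the label being clean
    (beta-mixture posterior in the paper, given here)."""
    z = np.eye(probs.shape[1])[probs.argmax(axis=1)]  # predicted one-hot
    mixed = w_clean[:, None] * targets_onehot + (1 - w_clean[:, None]) * z
    return -(mixed * np.log(probs + 1e-12)).sum(axis=1)
```

With w_clean = 1 this reduces to the standard cross-entropy on the given labels; with w_clean = 0 the sample is trained on its own prediction, which dampens the influence of likely-noisy labels.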
Ranked #44 on Image Classification on Clothing1M
no code implementations • 25 Apr 2019 • Diego Ortego, Kevin McGuinness, Juan C. SanMiguel, Eric Arazo, José M. Martínez, Noel E. O'Connor
This guiding process relies on foreground masks from independent (i.e., state-of-the-art) algorithms to implement an attention mechanism that incorporates the spatial location of foreground and background to compute their separate representations.