Search Results for author: Eric Arazo

Found 18 papers, 10 papers with code

ConvLoRA and AdaBN based Domain Adaptation via Self-Training

1 code implementation • 7 Feb 2024 • Sidra Aleem, Julia Dietlmeier, Eric Arazo, Suzanne Little

To further boost adaptation, we utilize Adaptive Batch Normalization (AdaBN), which computes target-specific running statistics, and use it along with ConvLoRA.

Domain Adaptation, Multi-target Domain Adaptation
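
As a rough illustration of the AdaBN step described above (not the authors' released code), the sketch below re-estimates BatchNorm running statistics on unlabeled target-domain data in PyTorch; the function name, device handling, and loader unpacking are illustrative assumptions.

    import torch
    import torch.nn as nn

    def adapt_batchnorm_stats(model, target_loader, device="cuda"):
        # AdaBN idea: swap the source-domain BatchNorm running statistics for
        # target-domain ones by forwarding unlabeled target data in train mode,
        # without updating any learnable parameters.
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                m.reset_running_stats()  # discard source statistics
                m.momentum = None        # cumulative average over the target pass
        model.train()                    # train mode so BN layers update running stats
        with torch.no_grad():            # gradients off: only the statistics change
            for batch in target_loader:
                x = batch[0] if isinstance(batch, (list, tuple)) else batch
                model(x.to(device))
        model.eval()
        return model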

Self-Supervised and Semi-Supervised Polyp Segmentation using Synthetic Data

no code implementations • 22 Jul 2023 • Enric Moreu, Eric Arazo, Kevin McGuinness, Noel E. O'Connor

To address both challenges, we leverage synthetic data and propose an end-to-end model for polyp segmentation that integrates real and synthetic data to artificially increase the size of the datasets and aid the training when unlabeled samples are available.

Image-to-Image Translation, Segmentation

Joint one-sided synthetic unpaired image translation and segmentation for colorectal cancer prevention

no code implementations • 20 Jul 2023 • Enric Moreu, Eric Arazo, Kevin McGuinness, Noel E. O'Connor

We take advantage of recent one-sided translation models because they use significantly less memory, allowing us to add a segmentation model in the training loop.

Segmentation, Translation

Is your noise correction noisy? PLS: Robustness to label noise with two stage detection

2 code implementations • 10 Oct 2022 • Paul Albert, Eric Arazo, Tarun Krishna, Noel E. O'Connor, Kevin McGuinness

Experiments demonstrate the state-of-the-art performance of our Pseudo-Loss Selection (PLS) algorithm on a variety of benchmark datasets including curated data synthetically corrupted with in-distribution and out-of-distribution noise, and two real world web noise datasets.

Pseudo Label

Cardiac Segmentation using Transfer Learning under Respiratory Motion Artifacts

no code implementations • 20 Sep 2022 • Carles Garcia-Cabrera, Eric Arazo, Kathleen M. Curran, Noel E. O'Connor, Kevin McGuinness

Methods that are resilient to artifacts in cardiac magnetic resonance imaging (MRI) while performing ventricle segmentation are crucial for ensuring quality in the structural and functional analysis of those tissues.

Cardiac Segmentation, Transfer Learning

Embedding contrastive unsupervised features to cluster in- and out-of-distribution noise in corrupted image datasets

1 code implementation • 4 Jul 2022 • Paul Albert, Eric Arazo, Noel E. O'Connor, Kevin McGuinness

These noisy samples have been evidenced by previous works to be a mixture of in-distribution (ID) samples, assigned to the incorrect category but presenting similar visual semantics to other classes in the dataset, and out-of-distribution (OOD) images, which share no semantic correlation with any category from the dataset.

Clustering, Contrastive Learning, +2

Segmentation Enhanced Lameness Detection in Dairy Cows from RGB and Depth Video

no code implementations • 9 Jun 2022 • Eric Arazo, Robin Aly, Kevin McGuinness

Cow lameness is a severe condition that affects the life cycle and life quality of dairy cows and results in considerable economic losses.

How Important is Importance Sampling for Deep Budgeted Training?

1 code implementation • 27 Oct 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We suggest that, given a specific budget, the best course of action is to disregard the importance and introduce adequate data augmentation; e.g., when reducing the budget to 30% in CIFAR-10/100, RICAP data augmentation maintains accuracy, while importance sampling does not.

Data Augmentation
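
The RICAP augmentation mentioned above patches together crops from four images and mixes the labels in proportion to patch area. Below is a minimal sketch of that idea in PyTorch; the function name and the beta parameter are illustrative assumptions, not taken from the paper's code.

    import numpy as np
    import torch

    def ricap(images, labels, beta=0.3):
        # Patch together crops from four shuffled copies of the batch and
        # weight each label set by the area of its patch.
        B, _, H, W = images.shape
        w = int(np.round(W * np.random.beta(beta, beta)))  # horizontal boundary
        h = int(np.round(H * np.random.beta(beta, beta)))  # vertical boundary
        widths = [w, W - w, w, W - w]
        heights = [h, h, H - h, H - h]
        patched = images.new_zeros(images.shape)
        label_list, weight_list = [], []
        for k, (pw, ph) in enumerate(zip(widths, heights)):
            idx = torch.randperm(B)                     # shuffle the batch per patch
            x0 = np.random.randint(0, W - pw + 1)       # random crop position
            y0 = np.random.randint(0, H - ph + 1)
            crop = images[idx][:, :, y0:y0 + ph, x0:x0 + pw]
            ox = 0 if k % 2 == 0 else w                 # quadrant offsets
            oy = 0 if k < 2 else h
            patched[:, :, oy:oy + ph, ox:ox + pw] = crop
            label_list.append(labels[idx])
            weight_list.append(pw * ph / (W * H))       # label weight = patch area
        return patched, label_list, weight_list

The training loss is then the area-weighted sum of cross-entropy losses over the four label sets.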

Addressing out-of-distribution label noise in webly-labelled data

no code implementations • 26 Oct 2021 • Paul Albert, Diego Ortego, Eric Arazo, Noel O'Connor, Kevin McGuinness

We propose a simple solution to bridge the gap with a fully clean dataset using Dynamic Softening of Out-of-distribution Samples (DSOS), which we design on corrupted versions of the CIFAR-100 dataset, and compare against state-of-the-art algorithms on the web-noise-perturbed MiniImageNet and Stanford datasets and on real label noise datasets: WebVision 1.0 and Clothing1M.

Image Classification

The Importance of Importance Sampling for Deep Budgeted Training

no code implementations • 1 Jan 2021 • Eric Arazo, Diego Ortego, Paul Albert, Noel O'Connor, Kevin McGuinness

For example, when training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy with certain data augmentation surpasses the performance of 100%-budget models trained with standard data augmentation.

Data Augmentation

Multi-Objective Interpolation Training for Robustness to Label Noise

1 code implementation • CVPR 2021 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

We further propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.

Contrastive Learning, Image Classification, +3
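
A hedged sketch of the neighbour-vote idea described above: per-sample soft labels are built from the k nearest neighbours in the learned feature space, and disagreement with the annotated label flags likely-noisy samples. The function name, the value of k, and the cosine-similarity choice are assumptions for illustration, not the paper's exact procedure.

    import torch
    import torch.nn.functional as F

    def detect_noisy_labels(features, labels, num_classes, k=250):
        feats = F.normalize(features, dim=1)            # cosine-similarity space
        sims = feats @ feats.t()                        # (N, N) similarity matrix
        sims.fill_diagonal_(-1.0)                       # exclude each sample itself
        _, nn_idx = sims.topk(k, dim=1)                 # k nearest neighbours
        one_hot = F.one_hot(labels, num_classes).float()
        soft_labels = one_hot[nn_idx].mean(dim=1)       # (N, C) neighbour vote
        noisy_mask = soft_labels.argmax(dim=1) != labels  # disagreement -> likely noisy
        return soft_labels, noisy_mask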

Reliable Label Bootstrapping for Semi-Supervised Learning

1 code implementation • 23 Jul 2020 • Paul Albert, Diego Ortego, Eric Arazo, Noel E. O'Connor, Kevin McGuinness

We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.

Self-Supervised Learning

Towards Robust Learning with Different Label Noise Distributions

1 code implementation • 18 Dec 2019 • Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness

However, we show that different noise distributions make the application of this trick less straightforward and propose to continuously relabel all images to reveal a discriminative loss against multiple distributions.

Memorization, Representation Learning

Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

4 code implementations • 8 Aug 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples.

Image Classification
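
To make the pseudo-labeling loop above concrete, here is a minimal sketch with assumed names; it is not the paper's full recipe, which additionally regularizes training (e.g., with mixup) to counter the confirmation bias the title refers to.

    import torch
    import torch.nn.functional as F

    def pseudo_label_loss(model, x_labeled, y_labeled, x_unlabeled, weight=1.0):
        # Supervised term on the labeled batch.
        loss_l = F.cross_entropy(model(x_labeled), y_labeled)
        # Soft pseudo-labels: the network's own predictions, detached from the graph.
        with torch.no_grad():
            pseudo = F.softmax(model(x_unlabeled), dim=1)
        # Unsupervised term: cross-entropy against the soft pseudo-labels.
        logits_u = model(x_unlabeled)
        loss_u = -(pseudo * F.log_softmax(logits_u, dim=1)).sum(dim=1).mean()
        return loss_l + weight * loss_u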

Unsupervised Label Noise Modeling and Loss Correction

2 code implementations • 25 Apr 2019 • Eric Arazo, Diego Ortego, Paul Albert, Noel E. O'Connor, Kevin McGuinness

Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss).

Image Classification
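
The loss correction described above can be sketched as a per-sample convex combination of the cross-entropy with the given label and with the network's own prediction, weighted by the estimated probability that the label is clean. The beta-mixture fitting (via EM over the per-sample loss distribution) that produces that probability is omitted here, and the function and argument names are illustrative.

    import torch
    import torch.nn.functional as F

    def dynamic_bootstrapping_loss(logits, targets, clean_prob):
        # clean_prob: per-sample probability that the given label is correct,
        # e.g. the posterior of the "clean" component of a two-component beta mixture.
        ce_label = F.cross_entropy(logits, targets, reduction="none")
        pred = logits.argmax(dim=1)                           # network's own prediction
        ce_pred = F.cross_entropy(logits, pred, reduction="none")
        # Trust the annotation where clean_prob is high, the prediction elsewhere.
        return (clean_prob * ce_label + (1.0 - clean_prob) * ce_pred).mean()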

On guiding video object segmentation

no code implementations • 25 Apr 2019 • Diego Ortego, Kevin McGuinness, Juan C. SanMiguel, Eric Arazo, José M. Martínez, Noel E. O'Connor

This guiding process relies on foreground masks from independent algorithms (i.e. state-of-the-art algorithms) to implement an attention mechanism that incorporates the spatial location of foreground and background to compute their separated representations.

Foreground Segmentation, Object, +5
