Search Results for author: Guillermo Ortiz-Jimenez

Found 8 papers, 4 papers with code

Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels

no code implementations 10 Oct 2023 Ke Wang, Guillermo Ortiz-Jimenez, Rodolphe Jenatton, Mark Collier, Efi Kokiopoulou, Pascal Frossard

Label noise is a pervasive problem in deep learning that often compromises the generalization performance of trained models.

Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models

1 code implementation NeurIPS 2023 Guillermo Ortiz-Jimenez, Alessandro Favero, Pascal Frossard

Task arithmetic has recently emerged as a cost-effective and scalable approach to editing pre-trained models directly in weight space: adding the fine-tuned weight differences of different tasks improves the model's performance on those tasks, while negating them induces task forgetting.

Disentanglement
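The weight-space editing recipe described in the abstract can be sketched in a few lines. This is a minimal illustration of the general task-arithmetic idea (task vectors as weight differences, added or negated with scaling coefficients), not the paper's implementation; the function names and toy weights are hypothetical:

```python
import numpy as np

def task_vector(pretrained, finetuned):
    """Task vector: difference between fine-tuned and pre-trained weights."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def edit_model(pretrained, task_vectors, alphas):
    """Edit pre-trained weights by a scaled sum of task vectors.

    A positive alpha aims to improve performance on that task;
    a negative alpha aims to induce forgetting of it.
    """
    edited = {k: v.copy() for k, v in pretrained.items()}
    for tau, alpha in zip(task_vectors, alphas):
        for k in edited:
            edited[k] = edited[k] + alpha * tau[k]
    return edited

# Toy example with a single weight matrix.
theta_pre = {"w": np.zeros((2, 2))}
theta_ft = {"w": np.ones((2, 2))}          # stand-in for fine-tuning on one task
tau = task_vector(theta_pre, theta_ft)
theta_neg = edit_model(theta_pre, [tau], alphas=[-1.0])  # negate the task vector
```

In practice the dictionaries would be model state dicts and the coefficients alpha would be tuned on held-out data.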

When does Privileged Information Explain Away Label Noise?

1 code implementation 3 Mar 2023 Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander D'Amour, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise.

A neural anisotropic view of underspecification in deep learning

no code implementations 29 Apr 2021 Guillermo Ortiz-Jimenez, Itamar Franco Salazar-Reque, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we propose to study this problem from a geometric perspective, aiming to understand key characteristics of neural network solutions in underspecified settings, such as how the geometry of the learned function relates to the data representation.

Fairness Inductive Bias

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness

no code implementations 19 Oct 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.

Adversarial Robustness

Neural Anisotropy Directions

2 code implementations NeurIPS 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers.

Inductive Bias

Hold me tight! Influence of discriminative features on deep network boundaries

1 code implementation NeurIPS 2020 Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.

Adversarial Robustness
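The connection the abstract draws between adversarial perturbations and distance to the decision boundary is exact in the linear case: the smallest perturbation that flips a linear classifier's decision has norm equal to the sample's distance to the boundary. The sketch below is a hypothetical linear illustration of that intuition, not the paper's deep-network method:

```python
import numpy as np

def margin_distance(w, b, x):
    """Euclidean distance from x to the linear boundary {x : w.x + b = 0}."""
    return abs(w @ x + b) / np.linalg.norm(w)

def minimal_perturbation(w, b, x):
    """Smallest L2 perturbation moving x onto the boundary of w.x + b = 0."""
    return -(w @ x + b) / (w @ w) * w

w = np.array([3.0, 4.0])   # illustrative linear classifier
b = -1.0
x = np.array([2.0, 1.0])

delta = minimal_perturbation(w, b, x)
# x + delta lies exactly on the decision boundary ...
on_boundary = w @ (x + delta) + b
# ... and the perturbation's norm equals the margin distance.
dist = margin_distance(w, b, x)
```

For deep networks the boundary is curved, so such distances are estimated numerically, e.g. with minimal adversarial attacks.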

Forward-Backward Splitting for Optimal Transport based Problems

no code implementations 20 Sep 2019 Guillermo Ortiz-Jimenez, Mireille El Gheche, Effrosyni Simou, Hermina Petric Maretic, Pascal Frossard

Experiments show that the proposed method significantly improves on the state of the art, in both speed and performance, for domain adaptation on a continually rotating distribution derived from the standard two-moons dataset.

Domain Adaptation
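Forward-backward splitting alternates an explicit gradient step on a smooth term with a proximal step on a non-smooth term. The sketch below applies the generic scheme to a toy lasso problem as an illustration of the splitting itself; the paper's actual objective involves optimal transport and is not reproduced here:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, step, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

# Toy problem: with A = I the solution is soft-thresholded y.
A = np.eye(2)
y = np.array([2.0, 0.1])
x_hat = forward_backward(A, y, lam=0.5, step=1.0)
```

Convergence requires the step size to be small relative to the Lipschitz constant of the smooth term's gradient (here ||A^T A|| = 1, so step = 1.0 is safe).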
