1 code implementation • 14 Sep 2023 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
This paper addresses the challenge of generating Counterfactual Explanations (CEs): identifying and modifying the fewest features needed to alter a classifier's prediction for a given image.
no code implementations • 21 Apr 2023 • Angela Castillo, Maria Escobar, Guillaume Jeanneret, Albert Pumarola, Pablo Arbeláez, Ali Thabet, Artsiom Sanakoyeu
To the best of our knowledge, this is the first approach that uses the reverse diffusion process to model full-body tracking as a conditional sequence generation task.
1 code implementation • CVPR 2023 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
Counterfactual explanations and adversarial attacks share a goal: flipping a classifier's output label with a minimal perturbation, regardless of the perturbation's characteristics.
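The papers above operate on images with far more sophisticated machinery (e.g. diffusion models), but the shared "flip the label with the smallest change" objective can be illustrated on a toy linear classifier. The sketch below is purely illustrative; the weights, input, and 1% overshoot factor are hypothetical and not taken from any of the listed papers.

```python
import numpy as np

# Toy linear "classifier": predicts class 1 when w.x + b > 0.
# All values here are illustrative, not from the papers above.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, 0.4, 0.3])           # original input, classified as 0
assert predict(x) == 0

# Smallest L2 perturbation that flips a linear classifier's output:
# move x along w just past the decision boundary w.x + b = 0.
margin = w @ x + b                      # signed score of the current input
delta = -(margin / (w @ w)) * w * 1.01  # 1% overshoot past the boundary
x_cf = x + delta                        # the "counterfactual" input

assert predict(x_cf) == 1
print(np.linalg.norm(delta))            # size of the minimal change
```

For a linear model this closed-form step is both the minimal adversarial perturbation and the minimal counterfactual; the distinction the CVPR 2023 paper draws only becomes meaningful for deep networks, where the two can differ in semantics and plausibility.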
1 code implementation • 29 Mar 2022 • Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable.
1 code implementation • 26 Aug 2021 • Guillaume Jeanneret, Juan C. Pérez, Pablo Arbeláez
Adversarial robustness is a growing field that exposes the brittleness of neural networks.
1 code implementation • 29 Jul 2021 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks.
1 code implementation • ECCV 2020 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
We revisit the benefits of merging classical vision concepts with deep learning models.
no code implementations • 11 Apr 2019 • Juan Leon Alcazar, Maria A. Bravo, Ali K. Thabet, Guillaume Jeanneret, Thomas Brox, Pablo Arbeláez, Bernard Ghanem
Instance-level video segmentation requires a solid integration of spatial and temporal information.