Search Results for author: Stephane Ayache

Found 9 papers, 1 paper with code

A Learning Paradigm for Interpretable Gradients

no code implementations • 23 Apr 2024 • Felipe Torres Figueroa, Hanwei Zhang, Ronan Sicre, Yannis Avrithis, Stephane Ayache

This paper studies interpretability of convolutional networks by means of saliency maps.

Opti-CAM: Optimizing saliency maps for interpretability

no code implementations • 17 Jan 2023 • Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, Stephane Ayache

Methods based on class activation maps (CAM) provide a simple mechanism to interpret predictions of convolutional neural networks by using linear combinations of feature maps as saliency maps.
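The CAM mechanism the abstract describes can be sketched in a few lines: a saliency map is a linear combination of the last convolutional layer's feature maps. This is a minimal illustrative sketch of plain CAM (not the paper's Opti-CAM method); the array shapes and random inputs are assumptions for the example.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Plain CAM sketch: weight each feature map by the classifier weight
    for the target class, sum, ReLU, and normalize to [0, 1].

    feature_maps: (C, H, W) activations; class_weights: (C,) weights.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()           # normalize for visualization
    return cam

# Hypothetical activations and class weights, just to exercise the function.
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 7, 7))
w = rng.standard_normal(8)
saliency = class_activation_map(maps, w)
print(saliency.shape)  # (7, 7)
```

Opti-CAM, per the abstract, optimizes the combination weights rather than taking them directly from the classifier; the sketch above only shows the baseline linear-combination idea.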

Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies?

1 code implementation • 21 Oct 2022 • Mitja Nikolaus, Emmanuelle Salin, Stephane Ayache, Abdellah Fourtassi, Benoit Favre

Recent advances in vision-and-language modeling have seen the development of Transformer architectures that achieve remarkable performance on multimodal reasoning tasks.

Image-text matching • Language Modelling +2

ChaLearn Looking at People: Inpainting and Denoising challenges

no code implementations • 24 Jun 2021 • Sergio Escalera, Marti Soler, Stephane Ayache, Umut Guclu, Jun Wan, Meysam Madadi, Xavier Baro, Hugo Jair Escalante, Isabelle Guyon

Dealing with incomplete information is a well-studied problem in the context of machine learning and computational intelligence.

Denoising • Pose Estimation

Sparse matrix products for neural network compression

no code implementations • 1 Jan 2021 • Luc Giffon, Hachem Kadri, Stephane Ayache, Ronan Sicre, Thierry Artieres

Over-parameterization of neural networks is a well-known issue that accompanies their strong performance.

Neural Network Compression
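The idea named in the title can be illustrated very simply: approximate a dense weight matrix by a product of sparse factors, so fewer nonzero parameters need to be stored. This is a hedged sketch of the storage trade-off only, not the paper's algorithm; the matrix sizes, sparsity level, and the magnitude-pruning helper `sparsify` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
W = rng.standard_normal((d, d))       # dense layer: d * d = 4096 parameters

def sparsify(M, keep=0.25):
    """Illustrative magnitude pruning: keep the largest-magnitude fraction
    of entries and zero out the rest."""
    k = int(M.size * keep)
    thresh = np.sort(np.abs(M).ravel())[-k]
    return np.where(np.abs(M) >= thresh, M, 0.0)

# Replace W by a product of two sparse factors S1 @ S2 (here random, since
# the point is only the parameter count; a real method would fit them to W).
S1 = sparsify(rng.standard_normal((d, d)))
S2 = sparsify(rng.standard_normal((d, d)))
nnz = np.count_nonzero(S1) + np.count_nonzero(S2)
print(nnz < W.size)  # True: the two sparse factors store fewer nonzeros
```

The compression comes from storing only the nonzeros of each factor; the product S1 @ S2 is still a full d × d linear map at inference time.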

Distillation of Weighted Automata from Recurrent Neural Networks using a Spectral Approach

no code implementations • 28 Sep 2020 • Remi Eyraud, Stephane Ayache

Moreover, we show how the process provides interesting insights into the behavior of RNNs learned from data, extending the scope of this work to the explainability of deep learning models.

Knowledge Distillation • Language Modelling
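The spectral approach behind this kind of distillation can be sketched on a toy example: build a Hankel matrix of the target function (in the paper, queried from the RNN), truncate its SVD, and read off a weighted automaton (WFA). This is a minimal Balle-style spectral-learning sketch, not the paper's exact procedure; the function names and the toy target f(w) = 0.5**(len(w)+1) are assumptions for the example.

```python
import numpy as np

def spectral_wfa(H, H_sigmas, h_prefix_eps, h_suffix_eps, rank):
    """Classical spectral learning of a WFA from Hankel matrices.

    H: prefixes x suffixes Hankel matrix of the target function.
    H_sigmas: {symbol: shifted Hankel with entries f(prefix + symbol + suffix)}.
    h_prefix_eps / h_suffix_eps: row/column of H for the empty prefix/suffix.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    V = Vt[:rank].T                     # rank-truncated right singular vectors
    HV_pinv = np.linalg.pinv(H @ V)
    alpha = h_prefix_eps @ V            # initial weight vector
    beta = HV_pinv @ h_suffix_eps       # final weight vector
    A = {sig: HV_pinv @ Hs @ V for sig, Hs in H_sigmas.items()}
    return alpha, A, beta

def wfa_eval(alpha, A, beta, word):
    """f(word) = alpha @ A[w1] @ ... @ A[wk] @ beta."""
    v = alpha
    for sym in word:
        v = v @ A[sym]
    return float(v @ beta)

# Toy rank-1 target over alphabet {'a'}: f(w) = 0.5 ** (len(w) + 1),
# with prefix and suffix basis {eps, 'a'}.
H = np.array([[0.5, 0.25], [0.25, 0.125]])
H_a = np.array([[0.25, 0.125], [0.125, 0.0625]])
alpha, A, beta = spectral_wfa(H, {'a': H_a}, H[0], H[:, 0], rank=1)
print(round(wfa_eval(alpha, A, beta, 'aa'), 3))  # 0.125
```

In the paper's setting the Hankel entries would come from querying the trained RNN rather than from a known function, which is what makes the extracted WFA a distilled surrogate of the network.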

Explaining Black Boxes on Sequential Data using Weighted Automata

no code implementations • 12 Oct 2018 • Stephane Ayache, Remi Eyraud, Noe Goudian

Understanding how a learned black box works is of crucial interest for the future of Machine Learning.
