Search Results for author: Gonçalo Mordido

Found 12 papers, 3 papers with code

Lookbehind-SAM: k steps back, 1 step forward

no code implementations • 31 Jul 2023 • Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar

Sharpness-aware minimization (SAM) methods have gained increasing popularity by formulating the problem of minimizing both loss value and loss sharpness as a minimax objective.
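For reference, the standard SAM objective takes the minimax form below, where $\rho$ bounds the perturbation radius; as the title suggests, Lookbehind takes $k$ inner (ascent) steps before a single descent step:

$$\min_{w} \; \max_{\|\epsilon\|_2 \leq \rho} \; L(w + \epsilon)$$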

SAMSON: Sharpness-Aware Minimization Scaled by Outlier Normalization for Improving DNN Generalization and Robustness

no code implementations • 18 Nov 2022 • Gonçalo Mordido, Sébastien Henwood, Sarath Chandar, François Leduc-Primeau

In this work, we show that applying sharpness-aware training, by optimizing for both the loss value and loss sharpness, significantly improves robustness to noisy hardware at inference time without relying on any assumptions about the target hardware.
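A minimal sketch of the kind of robustness check this setting implies: inject multiplicative Gaussian noise into trained weights at inference time and measure output degradation. The noise model, noise level, and toy linear model are illustrative assumptions, not the SAMSON method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 8))   # toy "trained" weight matrix
x = rng.standard_normal(8)
clean = W @ x

def noisy_forward(sigma=0.1):
    # multiplicative Gaussian noise on the weights, a common hardware noise model
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return W_noisy @ x

mses = [np.mean((noisy_forward() - clean) ** 2) for _ in range(100)]
print("mean output MSE under weight noise:", np.mean(mses))
```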

Improving Meta-Learning Generalization with Activation-Based Early-Stopping

1 code implementation • 3 Aug 2022 • Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar

Specifically, we analyze the evolution, during meta-training, of the neural activations at each hidden layer on a small set of unlabelled support examples from a single task of the target task distribution, as this constitutes minimal and justifiably accessible information about the target problem.

Few-Shot Learning • Transfer Learning
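A minimal sketch of such an activation-based stopping criterion. The toy feature map, the mean-absolute-activation statistic, and the patience rule below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a real meta-learner: a feature map whose weights drift
# during "meta-training".
W = rng.standard_normal((16, 8))
support_x = rng.standard_normal((4, 8))    # small unlabelled support set

def meta_train_step():
    global W
    W += 0.01 * rng.standard_normal(W.shape)

def activation_statistic():
    acts = np.maximum(support_x @ W.T, 0)  # hidden-layer ReLU activations
    return float(np.mean(np.abs(acts)))    # illustrative summary statistic

best, wait, patience = -np.inf, 0, 10
for step in range(1000):
    meta_train_step()
    stat = activation_statistic()
    if stat > best + 1e-4:
        best, wait = stat, 0
    else:
        wait += 1
        if wait >= patience:               # statistic stopped improving
            print(f"early stop at step {step}, stat={stat:.4f}")
            break
```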

MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators

no code implementations • 3 May 2022 • Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, François Leduc-Primeau

Memristors enable the computation of matrix-vector multiplications (MVM) in memory and, therefore, show great potential in highly increasing the energy efficiency of deep neural network (DNN) inference accelerators.

Quantization
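For context, a sketch of the quantity MemSE is concerned with: the output MSE of a memristor matrix-vector multiply under conductance noise, estimated here empirically (MemSE itself predicts it fast, without simulation; the additive Gaussian noise model is an assumption).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 64))   # weights programmed as memristor conductances
x = rng.standard_normal(64)
sigma = 0.05                        # assumed conductance noise level

ideal = W @ x
trials = [(W + sigma * rng.standard_normal(W.shape)) @ x for _ in range(1000)]
mse = np.mean([(t - ideal) ** 2 for t in trials])
print("empirical MVM output MSE:", mse)
```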

Compressing 1D Time-Channel Separable Convolutions using Sparse Random Ternary Matrices

no code implementations • 31 Mar 2021 • Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller

We demonstrate that 1x1-convolutions in 1D time-channel separable convolutions may be replaced by constant, sparse random ternary matrices with weights in $\{-1, 0,+1\}$.

Speech Recognition
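A minimal sketch of the core construction: a fixed, sparse random ternary matrix standing in for a 1x1 convolution (which is a matrix multiply over channels). The sparsity level and any output scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, density = 64, 64, 0.25

# Entries are +1 or -1 with probability density/2 each, else 0; never trained.
mask = rng.random((c_out, c_in)) < density
signs = rng.choice([-1.0, 1.0], size=(c_out, c_in))
W_ternary = mask * signs                  # constant matrix with values in {-1, 0, +1}

x = rng.standard_normal((c_in, 100))      # (channels, time) feature map
y = W_ternary @ x                         # the "1x1 convolution"
print("nonzero fraction:", mask.mean())
```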

Evaluating Post-Training Compression in GANs using Locality-Sensitive Hashing

no code implementations • 22 Mar 2021 • Gonçalo Mordido, Haojin Yang, Christoph Meinel

The analysis of compression effects in generative adversarial networks (GANs) after training, i.e., without any fine-tuning, remains an unstudied, albeit important, topic, given the increasing computation and memory requirements of these models.

Quantization
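A sketch of how locality-sensitive hashing can compare samples from an original and a compressed generator: hash both sets with random hyperplanes and check how bucket occupancy shifts. This is a generic random-hyperplane LSH recipe with an illustrative histogram-intersection score, not necessarily the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 128, 16
planes = rng.standard_normal((n_bits, d))      # random hyperplanes

def bucket_ids(samples):
    bits = (samples @ planes.T) > 0            # sign pattern per sample
    return bits @ (1 << np.arange(n_bits))     # pattern -> integer bucket id

original = rng.standard_normal((1000, d))      # stand-in for generator outputs
compressed = original + 0.1 * rng.standard_normal(original.shape)

h1 = np.bincount(bucket_ids(original), minlength=2 ** n_bits)
h2 = np.bincount(bucket_ids(compressed), minlength=2 ** n_bits)
print("bucket overlap:", np.minimum(h1, h2).sum() / h1.sum())
```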

Improving the Evaluation of Generative Models with Fuzzy Logic

1 code implementation • 3 Feb 2020 • Julian Niedermeier, Gonçalo Mordido, Christoph Meinel

Objective and interpretable metrics to evaluate current artificial intelligence systems are of great importance, not only to analyze the current state of such systems but also to objectively measure progress in the future.

Image Generation

microbatchGAN: Stimulating Diversity with Multi-Adversarial Discrimination

no code implementations • 10 Jan 2020 • Gonçalo Mordido, Haojin Yang, Christoph Meinel

We propose to tackle the mode collapse problem in generative adversarial networks (GANs) by using multiple discriminators and assigning a different portion of each minibatch, called a microbatch, to each discriminator.
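A minimal sketch of the microbatch assignment: split each minibatch so every discriminator sees only its own portion. The split sizes and downstream loss aggregation are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_discriminators, batch_size = 4, 32
minibatch = rng.standard_normal((batch_size, 64))

# each discriminator k trains on (and scores) only its assigned microbatch
microbatches = np.array_split(minibatch, n_discriminators)
for k, micro in enumerate(microbatches):
    print(f"discriminator {k}: microbatch of shape {micro.shape}")
```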

Instant Quantization of Neural Networks using Monte Carlo Methods

no code implementations • 29 May 2019 • Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller

Low bit-width integer weights and activations are important for efficient inference, especially for lowering power consumption.

Quantization
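One way to realize Monte Carlo quantization is importance sampling: draw weight indices with probability proportional to $|w|$ and use signed hit counts as integer weights. The sampling budget and rescaling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256)                   # trained float weights
n_samples = 1024                               # assumed sampling budget

p = np.abs(w) / np.abs(w).sum()                # sample proportional to magnitude
hits = np.bincount(rng.choice(w.size, size=n_samples, p=p), minlength=w.size)
w_int = np.sign(w).astype(int) * hits          # integer weights, many exact zeros
scale = np.abs(w).sum() / n_samples            # rescale back to the float range

print("unique integer levels:", np.unique(w_int).size)
print("reconstruction MSE:", np.mean((scale * w_int - w) ** 2))
```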

Dropout-GAN: Learning from a Dynamic Ensemble of Discriminators

no code implementations • 30 Jul 2018 • Gonçalo Mordido, Haojin Yang, Christoph Meinel

We propose to incorporate adversarial dropout in generative multi-adversarial networks by omitting, or dropping out, the feedback of each discriminator in the framework with some probability at the end of each batch.
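A minimal sketch of that dropout step: keep each discriminator's feedback with probability $1 - p_{drop}$ when forming the generator's training signal. The mean aggregation and the keep-at-least-one fallback are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_losses = rng.random(5)                       # per-discriminator feedback
p_drop = 0.3

keep = rng.random(d_losses.size) > p_drop      # Bernoulli mask per discriminator
if not keep.any():                             # ensure at least one survives
    keep[rng.integers(d_losses.size)] = True
g_loss = d_losses[keep].mean()
print("kept discriminators:", np.flatnonzero(keep), "generator loss:", g_loss)
```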
