Search Results for author: Matteo Spallanzani

Found 6 papers, 2 with code

Reducing Neural Architecture Search Spaces with Training-Free Statistics and Computational Graph Clustering

no code implementations • 29 Apr 2022 • Thorir Mar Ingolfsson, Mark Vero, Xiaying Wang, Lorenzo Lamberti, Luca Benini, Matteo Spallanzani

The computational demands of neural architecture search (NAS) algorithms are usually directly proportional to the size of their target search spaces.

Clustering • Graph Clustering • +1
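
To make the idea in the abstract above concrete, here is a minimal, hypothetical sketch, not the paper's actual algorithm: score untrained candidate architectures with a cheap training-free statistic, bucket them by that statistic, and train only one representative per bucket, shrinking the effective search space. The function proxy_score and the quantile-based bucketing are illustrative assumptions; the paper clusters computational graphs, which is a much richer notion of similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_score(weights, x):
    # Hypothetical training-free statistic: fraction of distinct binary
    # ReLU activation patterns an untrained linear layer produces on a
    # probe minibatch (more diversity ~ more expressive candidate).
    codes = (x @ weights > 0).astype(np.int8)
    return len(np.unique(codes, axis=0)) / len(x)

# Toy "search space": 20 random untrained linear layers of varying width.
x = rng.standard_normal((64, 16))                       # probe minibatch
candidates = [rng.standard_normal((16, int(w))) for w in rng.integers(8, 128, 20)]
scores = np.array([proxy_score(w, x) for w in candidates])

# Crude 1-D bucketing by score quantiles; only one representative per
# bucket would then be trained instead of the full candidate set.
k = 4
edges = np.quantile(scores, np.linspace(0, 1, k + 1))
labels = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, k - 1)
for c in range(k):
    members = np.flatnonzero(labels == c)
    print(f"cluster {c}: {members.size} candidates -> train 1 representative")
```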

Training Quantised Neural Networks with STE Variants: the Additive Noise Annealing Algorithm

no code implementations • CVPR 2022 • Matteo Spallanzani, Gian Paolo Leonardi, Luca Benini

When testing ANA on the CIFAR-10 image classification benchmark, we find that the major impact on task accuracy is due not to the qualitative shape of the regularisations but to the proper synchronisation of the different STE variants used in a network, in accordance with the theoretical results.

Image Classification
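
As background for the ANA paper above: a straight-through estimator (STE) lets gradients flow through a non-differentiable quantisation step by substituting a surrogate gradient in the backward pass. Below is a minimal PyTorch sketch of one common STE variant (sign forward, clipped-identity backward); it is an illustration only and does not implement ANA's noise annealing or the synchronisation of variants the abstract refers to.

```python
import torch

class SignSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                  # binarise in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-identity surrogate gradient: pass gradients through
        # unchanged where |x| <= 1, block them elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

x = torch.randn(4, requires_grad=True)
y = SignSTE.apply(x)
y.sum().backward()
print(x.grad)  # 1 where |x| <= 1, 0 elsewhere
```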

Proceedings of the DATE Friday Workshop on System-level Design Methods for Deep Learning on Heterogeneous Architectures (SLOHA 2021)

no code implementations • 27 Jan 2021 • Frank Hannig, Paolo Meloni, Matteo Spallanzani, Matthias Ziegler

This volume contains the papers accepted at the first DATE Friday Workshop on System-level Design Methods for Deep Learning on Heterogeneous Architectures (SLOHA 2021), held virtually on February 5, 2021.

Analytical aspects of non-differentiable neural networks

no code implementations • 3 Nov 2020 • Gian Paolo Leonardi, Matteo Spallanzani

Research in computational deep learning has directed considerable efforts towards hardware-oriented optimisations for deep neural networks, via the simplification of the activation functions, or the quantisation of both activations and weights.

Quantization
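
For context on the quantisation mentioned in the abstract above: a generic b-bit uniform quantiser maps floating-point weights onto a small signed integer grid and back. The sketch below is a standard textbook construction for illustration, not the analysis developed in the paper.

```python
import numpy as np

def quantise(w, bits=8):
    # Symmetric uniform quantisation: round w / scale to signed integers.
    # int8 storage assumes bits <= 8.
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = float(np.abs(w).max()) / qmax
    if scale == 0.0:                            # all-zero tensor edge case
        scale = 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).standard_normal(5).astype(np.float32)
q, scale = quantise(w)
print(w)
print(q * scale)   # dequantised values approximate the original weights
```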
