Search Results for author: Giuseppe Maria Sarda

Found 2 papers, 1 paper with code

Precision-aware Latency and Energy Balancing on Multi-Accelerator Platforms for DNN Inference

1 code implementation • 8 Jun 2023 • Matteo Risso, Alessio Burrello, Giuseppe Maria Sarda, Luca Benini, Enrico Macii, Massimo Poncino, Marian Verhelst, Daniele Jahier Pagliari

The need to execute Deep Neural Networks (DNNs) at low latency and low power at the edge has spurred the development of new heterogeneous Systems-on-Chips (SoCs) encapsulating a diverse set of hardware accelerators.

Quantization
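As a rough illustration of the kind of latency/energy balancing named in the title above (not the paper's actual method, whose details are not given here), the sketch below exhaustively assigns each DNN layer to one of two accelerators so that total energy is minimized under a latency budget. The accelerator names, cost tables, and function are all hypothetical placeholders.

```python
from itertools import product

# Hypothetical per-layer cost tables (ms, mJ) for two accelerators;
# these numbers are invented for illustration only.
latency = {"acc_A": [1.2, 3.4, 0.8], "acc_B": [2.0, 1.5, 1.9]}  # ms per layer
energy  = {"acc_A": [5.0, 9.0, 3.0], "acc_B": [2.5, 4.0, 2.2]}  # mJ per layer

def best_mapping(latency, energy, latency_budget_ms):
    """Toy exhaustive search: assign each layer to one accelerator,
    minimizing total energy while keeping summed latency under a budget."""
    accs = list(latency)
    n_layers = len(next(iter(latency.values())))
    best = None
    for mapping in product(accs, repeat=n_layers):
        t = sum(latency[a][i] for i, a in enumerate(mapping))
        e = sum(energy[a][i] for i, a in enumerate(mapping))
        if t <= latency_budget_ms and (best is None or e < best[0]):
            best = (e, t, mapping)
    return best  # (energy_mJ, latency_ms, per-layer accelerator choice)

print(best_mapping(latency, energy, latency_budget_ms=5.0))
```

A real system would replace the exhaustive search with a scalable mapping strategy; this snippet only shows the latency-vs-energy trade-off being balanced.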

CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks

no code implementations • 31 Jul 2022 • Muhammad Abdullah Hanif, Giuseppe Maria Sarda, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique

The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle to deploying large DNNs in resource-constrained systems.

Quantization
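To make the non-uniform quantization idea named in the CoNLoCNN title concrete, here is a minimal, generic sketch that snaps weights to signed power-of-two levels, so small magnitudes get finer resolution than large ones. This is not the CoNLoCNN scheme; the function name and bit-width parameter are assumptions for illustration.

```python
import numpy as np

def quantize_pow2(weights, n_bits=4):
    """Quantize weights to signed power-of-two levels (a non-uniform grid).

    Generic illustration of non-uniform quantization, not the CoNLoCNN codebook.
    """
    n_levels = 2 ** (n_bits - 1)          # one bit reserved for the sign
    max_abs = np.max(np.abs(weights)) + 1e-12
    # Exponent grid descending from the largest magnitude by factors of two.
    exponents = np.floor(np.log2(max_abs)) - np.arange(n_levels)
    levels = 2.0 ** exponents
    # Snap each |w| to the nearest available level, then restore the sign.
    idx = np.argmin(np.abs(np.abs(weights)[..., None] - levels), axis=-1)
    return np.sign(weights) * levels[idx]

w = np.random.randn(64, 64).astype(np.float32) * 0.1
wq = quantize_pow2(w, n_bits=4)
print("mean abs error:", np.mean(np.abs(w - wq)))
```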
