Search Results for author: Manuele Rusci

Found 13 papers, 6 papers with code

Multi-resolution Rescored ByteTrack for Video Object Detection on Ultra-low-power Embedded Systems

1 code implementation • 17 Apr 2024 • Luca Bompani, Manuele Rusci, Daniele Palossi, Francesco Conti, Luca Benini

This paper introduces Multi-Resolution Rescored ByteTrack (MR2-ByteTrack), a novel video object detection framework for ultra-low-power embedded processors.

Object Detection +1

Few-Shot Open-Set Learning for On-Device Customization of KeyWord Spotting Systems

1 code implementation • 3 Jun 2023 • Manuele Rusci, Tinne Tuytelaars

A personalized KeyWord Spotting (KWS) pipeline typically requires training a Deep Learning model on a large set of user-defined speech utterances, preventing fast customization directly on-device.

Few-Shot Learning Keyword Spotting +1

Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device Learning on MicroControllers

1 code implementation • 30 May 2023 • Davide Nadalini, Manuele Rusci, Luca Benini, Francesco Conti

Enabling On-Device Learning (ODL) for Ultra-Low-Power Micro-Controller Units (MCUs) is a key step for post-deployment adaptation and fine-tuning of Deep Neural Network (DNN) models in future TinyML applications.

Continual Learning Image Classification +1

Accelerating RNN-based Speech Enhancement on a Multi-Core MCU with Mixed FP16-INT8 Post-Training Quantization

no code implementations • 14 Oct 2022 • Manuele Rusci, Marco Fariselli, Martin Croome, Francesco Paci, Eric Flamand

Unlike uniform 8-bit quantization, which degrades the PESQ score by 0.3 on average, the Mixed-Precision PTQ scheme incurs a degradation of only 0.06, while achieving a 1.4-1.7x memory saving.
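The idea behind a mixed-precision PTQ scheme like this can be sketched in a few lines: quantize each tensor to INT8 post-training, and fall back to FP16 for tensors whose quantization error is too large. The helper names and the error-based precision rule below are illustrative assumptions, not the paper's actual pipeline.

```python
# Toy sketch of mixed FP16/INT8 post-training quantization (PTQ).
# Helper names and the tolerance-based fallback rule are hypothetical,
# not taken from the paper.

def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: q = round(x / scale)."""
    scale = max((abs(v) for v in values), default=0.0) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

def pick_precision(values, tol=1e-2):
    """Keep a tensor in FP16 when the INT8 round-trip error exceeds a
    tolerance relative to the tensor's dynamic range (assumed rule)."""
    q, scale = quantize_int8(values)
    err = max(abs(a - b) for a, b in zip(values, dequantize(q, scale)))
    return "int8" if err <= tol * max(abs(v) for v in values) else "fp16"

weights = [0.02 * i - 0.5 for i in range(50)]  # toy weight tensor
q, s = quantize_int8(weights)
print(pick_precision(weights), min(q), max(q))
```

The memory saving follows directly from the storage sizes: an INT8 tensor needs 1 byte per element versus 2 for FP16, so a mostly-INT8 assignment lands between the two, consistent with the reported 1.4-1.7x figure.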

Quantization Speech Enhancement

A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays

no code implementations • 20 Oct 2021 • Leonardo Ravaglia, Manuele Rusci, Davide Nadalini, Alessandro Capotondi, Francesco Conti, Luca Benini

In this work, we introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power (PULP) processor.

Continual Learning Quantization

Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers

no code implementations • 12 Aug 2020 • Manuele Rusci, Marco Fariselli, Alessandro Capotondi, Luca Benini

The severe on-chip memory limitations are currently preventing the deployment of the most accurate Deep Neural Network (DNN) models on tiny MicroController Units (MCUs), even if leveraging an effective 8-bit quantization scheme.

Quantization

Robustifying the Deployment of tinyML Models for Autonomous mini-vehicles

no code implementations • 1 Jul 2020 • Miguel de Prado, Manuele Rusci, Romain Donze, Alessandro Capotondi, Serge Monnerat, Luca Benini, Nuria Pazos

We leverage a family of compact and high-throughput tinyCNNs to control the mini-vehicle, which learn in the target environment by imitating a computer vision algorithm, i.e., the expert.

Autonomous Driving Autonomous Navigation +1

PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors

1 code implementation • 29 Aug 2019 • Angelo Garofalo, Manuele Rusci, Francesco Conti, Davide Rossi, Luca Benini

We present PULP-NN, an optimized computing library for a parallel ultra-low-power tightly coupled cluster of RISC-V processors.

Quantization

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers

2 code implementations • 30 May 2019 • Manuele Rusci, Alessandro Capotondi, Luca Benini

To fit the memory and computational limitations of resource-constrained edge devices, we exploit mixed low-bitwidth compression, featuring 8-, 4- or 2-bit uniform quantization, and we model the inference graph with integer-only operations.
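The two ingredients named in the abstract, n-bit uniform quantization and an integer-only inference graph, can be sketched as follows. This is a minimal illustration under assumed symmetric per-tensor scaling; function names are hypothetical and the paper's actual memory-driven bitwidth assignment is not modeled.

```python
# Sketch of n-bit uniform symmetric quantization (n in {8, 4, 2}) and an
# integer-only dot product. Illustrative only; not the paper's pipeline.

def uniform_quantize(values, bits):
    """Quantize to signed n-bit integers with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = max((abs(v) for v in values), default=0.0) / qmax
    if scale == 0.0:
        scale = 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def int_dot(qa, sa, qb, sb):
    """Accumulate in integer arithmetic; one rescale at the very end,
    mirroring how integer-only inference defers floating-point work."""
    acc = sum(a * b for a, b in zip(qa, qb))  # int32-style accumulator
    return acc * sa * sb

a = [0.1, -0.4, 0.3]
b = [0.2, 0.2, -0.1]
for bits in (8, 4, 2):
    qa, sa = uniform_quantize(a, bits)
    qb, sb = uniform_quantize(b, bits)
    print(bits, int_dot(qa, sa, qb, sb))
```

Lower bitwidths shrink weight memory proportionally (4-bit halves an 8-bit model, 2-bit quarters it) at the cost of coarser value grids, which is the accuracy/memory trade-off the mixed-bitwidth search navigates.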

Quantization

Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?

no code implementations • 21 Nov 2017 • Manuele Rusci, Lukas Cavigelli, Luca Benini

Design automation in general, and in particular logic synthesis, can play a key role in enabling the design of application-specific Binarized Neural Networks (BNN).
