Search Results for author: Alessandro Capotondi

Found 10 papers, 4 papers with code

Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation

no code implementations · 26 Dec 2023 · Jia Cheng Hu, Roberto Cavicchioli, Giulia Berardinelli, Alessandro Capotondi

Although the Transformer is currently the best-performing architecture in the homogeneous configuration (self-attention only) in Neural Machine Translation, many state-of-the-art models in Natural Language Processing combine different Deep Learning approaches.

Machine Translation · Translation

A request for clarity over the End of Sequence token in the Self-Critical Sequence Training

2 code implementations · 20 May 2023 · Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi

The Image Captioning research field is currently compromised by the lack of transparency and awareness over the End-of-Sequence token (<Eos>) in the Self-Critical Sequence Training.

Image Captioning · Sentence

Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning

1 code implementation · 13 Aug 2022 · Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi

We introduce a method called the Expansion mechanism that processes the input unconstrained by the number of elements in the sequence.

Image Captioning

Exploring the sequence length bottleneck in the Transformer for Image Captioning

no code implementations · 7 Jul 2022 · Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi

Most recent state-of-the-art architectures rely on combinations and variations of three approaches: convolutional, recurrent, and self-attentive methods.

Image Captioning

A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays

no code implementations · 20 Oct 2021 · Leonardo Ravaglia, Manuele Rusci, Davide Nadalini, Alessandro Capotondi, Francesco Conti, Luca Benini

In this work, we introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power (PULP) processor.

Continual Learning · Quantization

Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers

no code implementations · 12 Aug 2020 · Manuele Rusci, Marco Fariselli, Alessandro Capotondi, Luca Benini

The severe on-chip memory limitations are currently preventing the deployment of the most accurate Deep Neural Network (DNN) models on tiny microcontroller units (MCUs), even if leveraging an effective 8-bit quantization scheme.
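The memory argument can be made concrete with a back-of-the-envelope sketch. The 2 MB flash budget and the parameter count below are illustrative assumptions, not figures from the paper; actual MCU budgets and model sizes vary:

```python
# Weight memory footprint of a DNN at different bitwidths, compared against an
# assumed MCU flash budget of 2 MB (illustrative; real budgets vary by device).
MCU_FLASH_BYTES = 2 * 1024 * 1024

params = 4_500_000  # assumed parameter count for a MobileNet-class model

for bits in (32, 8, 4, 2):
    footprint = params * bits // 8  # bytes needed to store all weights
    fits = "fits" if footprint <= MCU_FLASH_BYTES else "does not fit"
    print(f"{bits:>2}-bit weights: {footprint / 1024:.0f} KiB -> {fits}")
```

Under these assumptions even the 8-bit model (about 4.4 MiB of weights) exceeds the budget, which is exactly the situation the abstract describes and the motivation for going below 8 bits.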

Quantization

Robustifying the Deployment of tinyML Models for Autonomous mini-vehicles

no code implementations · 1 Jul 2020 · Miguel de Prado, Manuele Rusci, Romain Donze, Alessandro Capotondi, Serge Monnerat, Luca Benini, and Nuria Pazos

We leverage a family of compact and high-throughput tinyCNNs to control the mini-vehicle, which learn in the target environment by imitating a computer vision algorithm, i.e., the expert.
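The "imitating the expert" idea above is behavioral cloning: collect (observation, expert action) pairs and fit a student policy to them with supervised learning. A minimal sketch follows; the synthetic steering rule and the single linear unit standing in for a tinyCNN are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_policy(obs):
    # Stand-in "expert": steer right (1) when the lateral offset (feature 0)
    # is negative, left (0) otherwise. The paper's expert is a full computer
    # vision algorithm; this rule is an assumption for the sketch.
    return (obs[:, 0] < 0).astype(int)

# Collect demonstrations: observations labeled with the expert's actions.
obs = rng.normal(size=(1000, 4))
actions = expert_policy(obs)

# Student stand-in for the tinyCNN: one linear unit, logistic regression.
w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(obs @ w + b)))  # predicted P(action = 1)
    grad = p - actions                         # binary cross-entropy gradient
    w -= lr * (obs.T @ grad) / len(obs)
    b -= lr * grad.mean()

# The cloned policy should now agree with the expert on unseen observations.
test_obs = rng.normal(size=(200, 4))
cloned = (test_obs @ w + b > 0).astype(int)
agreement = (cloned == expert_policy(test_obs)).mean()
print(f"agreement with expert: {agreement:.1%}")
```

The design point the abstract leans on is that the expensive component (the vision expert) runs only at training time; at deployment only the compact cloned network runs on the vehicle.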

Autonomous Driving · Autonomous Navigation · +1

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers

2 code implementations · 30 May 2019 · Manuele Rusci, Alessandro Capotondi, Luca Benini

To fit the memory and computational limitations of resource-constrained edge-devices, we exploit mixed low-bitwidth compression, featuring 8, 4 or 2-bit uniform quantization, and we model the inference graph with integer-only operations.
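The uniform quantization the abstract refers to can be illustrated with a minimal per-tensor sketch at the paper's three bitwidths. The min-max calibration and the function names here are assumptions for illustration; the paper's actual pipeline (per-layer bitwidth selection, integer-only inference graph) is more involved:

```python
import numpy as np

def quantize(x, bits):
    """Uniform affine quantization of a float tensor to unsigned `bits`-bit codes."""
    qmax = 2**bits - 1
    lo, hi = float(x.min()), float(x.max())       # min-max calibration (assumed)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = int(np.clip(round(-lo / scale), 0, qmax))
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integer codes back to floats, e.g. to measure quantization error."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-3.0, 3.0, 256, dtype=np.float32)
for bits in (8, 4, 2):  # the bitwidths mixed across layers in the paper
    q, s, z = quantize(x, bits)
    err = np.abs(dequantize(q, s, z) - x).mean()
    print(f"{bits}-bit: scale={s:.4f}, mean abs error={err:.4f}")
```

The memory/accuracy trade-off is visible directly: halving the bitwidth halves storage but grows the quantization step (`scale`), and mixed precision chooses a bitwidth per layer to balance the two.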

Quantization

HERO: Heterogeneous Embedded Research Platform for Exploring RISC-V Manycore Accelerators on FPGA

2 code implementations · 18 Dec 2017 · Andreas Kurth, Pirmin Vogel, Alessandro Capotondi, Andrea Marongiu, Luca Benini

Heterogeneous embedded systems on chip (HESoCs) co-integrate a standard host processor with programmable manycore accelerators (PMCAs) to combine general-purpose computing with domain-specific, efficient processing capabilities.

Hardware Architecture · Distributed, Parallel, and Cluster Computing
