Search Results for author: Irem Boybat

Found 9 papers, 1 paper with code

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

1 code implementation • 17 May 2023 • Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui

Digital processors based on typical von Neumann architectures are not conducive to edge AI given the large amounts of required data movement in and out of memory.

Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics

no code implementations • 21 Sep 2022 • Dominique J. Kösters, Bryan A. Kortman, Irem Boybat, Elena Ferro, Sagar Dolas, Roberto de Austri, Johan Kwisthout, Hans Hilgenkamp, Theo Rasing, Heike Riel, Abu Sebastian, Sascha Caron, Johan H. Mentink

The massive use of artificial neural networks (ANNs), increasingly popular in many areas of scientific computing, rapidly increases the energy consumption of modern high-performance computing systems.

Anomaly Detection • Benchmarking

A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks

no code implementations • 4 Jan 2022 • Angelo Garofalo, Gianmarco Ottavi, Francesco Conti, Geethan Karunaratne, Irem Boybat, Luca Benini, Davide Rossi

Furthermore, we explore the requirements for end-to-end inference of a full mobile-grade DNN (MobileNetV2) in terms of IMC array resources, by scaling up our heterogeneous architecture to a multi-array accelerator.
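To give a feel for what "IMC array resources" means in practice, here is a minimal sketch that estimates how many fixed-size crossbar tiles are needed to hold a layer's unrolled weight matrix. The 256×256 tile size and the example layer shapes are illustrative assumptions, not figures from the paper.

```python
import math

def imc_tiles_needed(rows: int, cols: int, tile_rows: int = 256, tile_cols: int = 256) -> int:
    """Number of tile_rows x tile_cols crossbar tiles needed for a rows x cols weight matrix."""
    return math.ceil(rows / tile_rows) * math.ceil(cols / tile_cols)

# Hypothetical MobileNetV2-style layers, unrolled to (in_channels * k * k) x out_channels.
# Shapes below are illustrative assumptions, not measurements from the paper.
example_layers = {
    "pointwise 1x1 (96 -> 24)":   (96 * 1 * 1, 24),
    "pointwise 1x1 (192 -> 64)":  (192 * 1 * 1, 64),
    "pointwise 1x1 (960 -> 320)": (960 * 1 * 1, 320),
}

total = 0
for name, (rows, cols) in example_layers.items():
    n = imc_tiles_needed(rows, cols)
    total += n
    print(f"{name}: {rows}x{cols} weights -> {n} tile(s)")
print(f"total tiles for these layers: {total}")
```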

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

no code implementations • 10 Nov 2021 • Chuteng Zhou, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

We also describe AON-CiM, a programmable, minimal-area phase-change memory (PCM) analog CiM accelerator, with a novel layer-serial approach to remove the cost of complex interconnects associated with a fully-pipelined design.

Keyword Spotting
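The noise robustness referred to in the AnalogNets title is commonly obtained by exposing a model to analog non-idealities, for example by perturbing weights with device-like noise. The sketch below shows that general idea; the multiplicative Gaussian noise model and its 5% magnitude are assumptions for illustration and are not the noise model of AnalogNets or AON-CiM.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(x, W, b, weight_noise_std=0.05):
    """Linear layer whose weights are perturbed to mimic analog programming/read noise.

    The multiplicative Gaussian model and the 5% std are illustrative assumptions,
    not the noise model used in the AnalogNets paper.
    """
    W_noisy = W * (1.0 + rng.normal(0.0, weight_noise_std, size=W.shape))
    return x @ W_noisy + b

# Tiny usage example with random data.
x = rng.normal(size=(8, 16))          # batch of 8 feature vectors
W = rng.normal(size=(16, 4)) * 0.1    # weights of a small dense layer
b = np.zeros(4)
print(noisy_linear(x, W, b).shape)    # (8, 4)
```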

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

no code implementations • 25 Mar 2020 • Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou

Strategies to improve the efficiency of matrix-vector multiply (MVM) computation in hardware have been demonstrated with minimal impact on training accuracy.
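For context, the weight update behind such training strategies is the outer product ΔW = η · δ xᵀ. The sketch below approximates an outer product with Bernoulli bit streams, a generic stochastic-computing idea; it illustrates trading exact multiplications for cheap stochastic operations and is not the ESSOP architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_outer(x, delta, n_passes=64):
    """Approximate the outer product delta x^T with Bernoulli bit streams.

    Each value in [0, 1] is encoded as a stream of n_passes random bits whose mean
    equals the value; the outer product of bit vectors is accumulated and averaged.
    This is a textbook stochastic-computing sketch, not the ESSOP scheme.
    """
    acc = np.zeros((delta.size, x.size))
    for _ in range(n_passes):
        x_bits = rng.random(x.size) < x            # Bernoulli(x_i)
        d_bits = rng.random(delta.size) < delta    # Bernoulli(delta_j)
        acc += np.outer(d_bits, x_bits)            # AND of bits == product of Bernoullis
    return acc / n_passes

x = np.array([0.2, 0.8, 0.5])
delta = np.array([0.9, 0.1])
print(stochastic_outer(x, delta, n_passes=2000))   # ~ np.outer(delta, x)
print(np.outer(delta, x))
```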

Accurate deep neural network inference using computational phase-change memory

no code implementations • 7 Jun 2019 • Vinay Joshi, Manuel Le Gallo, Irem Boybat, Simon Haefeli, Christophe Piveteau, Martino Dazzi, Bipin Rajendran, Abu Sebastian, Evangelos Eleftheriou

In-memory computing is a promising non-von Neumann approach where certain computational tasks are performed within memory units by exploiting the physical attributes of memory devices.

Emerging Technologies
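As a concrete illustration of performing computation "within memory units by exploiting the physical attributes of memory devices", the sketch below simulates a matrix-vector multiply on a PCM-like crossbar: weights are mapped to differential conductance pairs, inputs act as voltages, and column currents give the dot products. The conductance range, mapping, and 2% read noise are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def analog_mvm(W, x, g_max=25e-6, read_noise_std=0.02):
    """Simulate y = W @ x on an analog crossbar.

    Weights are linearly mapped to differential conductance pairs in [0, g_max] siemens,
    inputs are applied as voltages, and Kirchhoff's current law sums the per-column
    currents.  The mapping and 2% multiplicative read noise are illustrative assumptions.
    """
    scale = np.max(np.abs(W)) or 1.0
    G_pos = np.clip(W, 0, None) / scale * g_max     # conductances encoding positive weights
    G_neg = np.clip(-W, 0, None) / scale * g_max    # conductances encoding negative weights
    noise = rng.normal(1.0, read_noise_std, size=(2,) + W.shape)
    i = (G_pos * noise[0] - G_neg * noise[1]) @ x   # differential column currents
    return i * scale / g_max                        # map currents back to weight units

W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
print(analog_mvm(W, x))   # close to W @ x, up to read noise
print(W @ x)
```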

Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

no code implementations • 28 May 2019 • S. R. Nandakumar, Irem Boybat, Manuel Le Gallo, Evangelos Eleftheriou, Abu Sebastian, Bipin Rajendran

Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, the work paves the way for the next generation of efficient brain-inspired systems.

Fatiguing STDP: Learning from Spike-Timing Codes in the Presence of Rate Codes

no code implementations • 17 Jun 2017 • Timoleon Moraitis, Abu Sebastian, Irem Boybat, Manuel Le Gallo, Tomas Tuma, Evangelos Eleftheriou

However, some spike-timing-related strengths of SNNs are hindered by the sensitivity of spike-timing-dependent plasticity (STDP) rules to input spike rates, as fine temporal correlations may be obstructed by coarser correlations between firing rates.
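To make the rate sensitivity concrete, here is a standard pair-based STDP rule in which every pre/post spike pair contributes an exponentially decayed potentiation or depression. At high firing rates, many loosely correlated pairs accumulate and can swamp a few precisely timed ones, which is the issue the fatiguing mechanism targets. The rule and parameters below are the textbook pair-based STDP, not the paper's fatiguing variant.

```python
import numpy as np

def pair_stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Total weight change from pair-based STDP for one synapse.

    Every (pre, post) spike pair contributes +a_plus*exp(-dt/tau) when the pre spike
    precedes the post spike (dt > 0) and -a_minus*exp(dt/tau) otherwise.  Parameters
    are textbook values (times in ms), not those of the fatiguing STDP rule.
    """
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            dw += a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)
    return dw

# A few precisely timed pairs vs. many rate-driven, loosely correlated spikes:
print(pair_stdp_dw(pre_times=[10, 50], post_times=[12, 52]))          # timing-driven potentiation
print(pair_stdp_dw(pre_times=np.arange(0, 200, 5),                    # 200 Hz pre spike train
                   post_times=np.arange(2.5, 200, 5)))                # 200 Hz post spike train
```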
