no code implementations • 12 Jul 2023 • Shalini Shrivastava, Vivek Saraswat, Gayatri Dash, Samyak Chakrabarty, Udayan Ganguly
Training deep neural networks (DNNs) is computationally intensive, but arrays of non-volatile memories like Charge Trap Flash (CTF) can accelerate DNN operations using in-memory computing.
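In-memory computing maps multiply-accumulate operations onto device physics: weights are stored as conductances and inputs applied as voltages, so each row's output current is a dot product computed in a single step. A minimal idealized sketch of this idea (generic crossbar model, not the paper's CTF device physics; all names illustrative):

```python
import numpy as np

def crossbar_mac(G, v):
    """Idealized in-memory multiply-accumulate: G holds the stored
    conductances, v the input voltages. By Ohm's and Kirchhoff's laws,
    the current collected on row i is the dot product G[i] @ v.
    """
    return G @ v

G = np.array([[1.0, 2.0],
              [0.5, 0.0]])   # programmed conductances (arbitrary units)
v = np.array([0.1, 0.2])     # input voltages
i_out = crossbar_mac(G, v)   # row currents: [0.5, 0.05]
```

Real arrays add non-idealities (wire resistance, device variation, read noise) on top of this ideal model.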
no code implementations • 10 Jul 2023 • Paritosh Meihar, Rowtu Srinu, Sandip Lashkare, Ajay Kumar Singh, Halid Mulaosmanovic, Veeresh Deshpande, Stefan Dünkel, Sven Beyer, Udayan Ganguly
We demonstrate the conventional 1-bit FeFET, the MirrorBit, and a MirrorBit-based ternary content-addressable memory (MCAM, i.e., MirrorBit-based TCAM) within the same field-programmable array.
no code implementations • 6 Apr 2023 • Paritosh Meihar, Rowtu Srinu, Vivek Saraswat, Sandip Lashkare, Halid Mulaosmanovic, Ajay Kumar Singh, Stefan Dünkel, Sven Beyer, Udayan Ganguly
A TCAD simulation is also presented to explain the origin and operation of the MirrorBit states, based on an FeFET model calibrated to the GlobalFoundries FeFET device.
no code implementations • 20 Jul 2022 • Anmol Biswas, Vivek Saraswat, Udayan Ganguly
Although signed gradient values are a challenge for spike-based representation, we tackle this by splitting the gradient signal into positive and negative streams.
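Since spike counts or rates are inherently non-negative, one way to carry a signed gradient is as two non-negative streams whose difference recovers the original signal. A minimal sketch of that decomposition (the arrays stand in for the paper's spike streams; names are illustrative):

```python
import numpy as np

def split_gradient(grad):
    """Split a signed gradient into two non-negative streams.

    pos carries the positive part, neg the magnitudes of the negative
    part, so that pos - neg reconstructs the original gradient.
    """
    pos = np.maximum(grad, 0.0)   # positive stream
    neg = np.maximum(-grad, 0.0)  # negative stream (magnitudes)
    return pos, neg

grad = np.array([0.5, -1.2, 0.0, 2.0])
pos, neg = split_gradient(grad)
assert np.allclose(pos - neg, grad)  # decomposition is lossless
```

Each stream can then be encoded with ordinary (non-negative) spike rates.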
no code implementations • 30 Jun 2021 • Jayesh Choudhary, Vivek Saraswat, Udayan Ganguly
In this work, we aim to devise an end-to-end spiking implementation for contour tracking in 3D media, inspired by chemotaxis, in which the worm reaches the region at a given setpoint concentration.
no code implementations • 29 Jun 2021 • Vineet Kotariya, Udayan Ganguly
This demonstrates the potential of the framework for solving such problems in the spiking domain.
no code implementations • 4 May 2021 • Apoorv Kishore, Vivek Saraswat, Udayan Ganguly
C. elegans performs chemotaxis using klinokinesis: the worm senses concentration with a single sensor, computes the concentration gradient from successive samples, and forages by gradient ascent/descent toward the target concentration, followed by contour tracking.
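The core of klinokinesis with a single sensor is that the spatial gradient along the current heading must be inferred from the temporal difference of successive concentration samples. A hypothetical non-spiking sketch of one control step (all names and the turn rule are illustrative, not the paper's circuit):

```python
def klinokinesis_step(c_now, c_prev, c_target, heading, turn_angle=0.5):
    """One illustrative klinokinesis update.

    A single sensor yields concentration samples; the temporal difference
    approximates the gradient along the heading. If the current motion
    moves the concentration toward the setpoint, keep the heading;
    otherwise turn.
    """
    gradient = c_now - c_prev   # temporal estimate of spatial gradient
    error = c_target - c_now    # signed distance to the setpoint
    # Improving when the gradient points in the direction of the error
    if gradient * error <= 0:
        heading += turn_angle   # turn to sample a new direction
    return heading
```

Below the setpoint and climbing (`c_prev=0.5, c_now=1.0, c_target=2.0`) the heading is kept; below the setpoint and descending, the bot turns.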
no code implementations • 29 Apr 2021 • Vivek Saraswat, Ajinkya Gorad, Anand Naik, Aakash Patil, Udayan Ganguly
In this work, we analyze the role of synaptic orders, namely δ (high output for a single time step), 0th (rectangular with a finite pulse width), 1st (exponential fall), and 2nd order (exponential rise and fall), and of synaptic timescales on the reservoir output response and on TI-46 spoken-digit classification accuracy under a more comprehensive parameter sweep.
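The four synaptic orders correspond to four impulse-response kernels of increasing smoothness. A minimal sketch of the kernel shapes as described (time constants and pulse width are illustrative, not the paper's fitted values):

```python
import numpy as np

def synaptic_kernel(order, t, tau=5.0, width=5.0):
    """Illustrative synaptic impulse responses for each order:
    'delta' : high output for a single time step
    0       : rectangular pulse of finite width
    1       : exponential fall
    2       : exponential rise and fall (alpha-like, peaks at t = tau)
    `t` is a non-negative time array.
    """
    if order == "delta":
        return np.where(t == 0, 1.0, 0.0)
    if order == 0:
        return np.where(t < width, 1.0, 0.0)
    if order == 1:
        return np.exp(-t / tau)
    if order == 2:
        return (t / tau) * np.exp(1.0 - t / tau)  # normalized to peak at 1
    raise ValueError("unknown synaptic order")

t = np.arange(0.0, 50.0)
k2 = synaptic_kernel(2, t)   # rises, peaks at t = tau, then decays
```

Higher orders spread a single input spike over longer timescales, which is what changes the reservoir's memory of past inputs.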
no code implementations • 1 Aug 2020 • Shashwat Shukla, Rohan Pathak, Vivek Saraswat, Udayan Ganguly
In particular, we focus on the problem of contour tracking, wherein the bot must reach and subsequently follow a desired concentration setpoint.
no code implementations • 9 Mar 2020 • Varun Bhatt, Shalini Shrivastava, Tanmay Chavan, Udayan Ganguly
The in-memory computing paradigm with emerging memory devices has been recently shown to be a promising way to accelerate deep learning.
no code implementations • 26 Feb 2019 • Tanmay Chavan, Sangya Dutta, Nihar R. Mohapatra, Udayan Ganguly
Neuromorphic engineering implements SNNs in hardware, aspiring to mimic the brain at scale (i.e., 100 billion neurons) with biological area and energy efficiency.
no code implementations • 18 Jan 2019 • Ajinkya Gorad, Vivek Saraswat, Udayan Ganguly
The Lyapunov exponent (μ), used to characterize the "non-linearity" of the network, correlates well with LSM performance.
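For a one-dimensional iterated map, the Lyapunov exponent is the long-run average of log|f′(x)| along a trajectory; positive values indicate sensitive dependence on initial conditions. A generic sketch of that estimate (the paper's network-specific measure of μ may be defined differently):

```python
import numpy as np

def lyapunov_exponent(f, df, x0, steps=5000, burn_in=100):
    """Estimate the Lyapunov exponent of a 1-D map `f` (with derivative
    `df`) as the time-average of log|f'(x)| along a trajectory.
    """
    x = x0
    for _ in range(burn_in):       # discard the initial transient
        x = f(x)
    total = 0.0
    for _ in range(steps):
        total += np.log(abs(df(x)))
        x = f(x)
    return total / steps

# Sanity check on the logistic map at r = 4, which is fully chaotic:
# its exact Lyapunov exponent is ln 2 ≈ 0.693.
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 - 8.0 * x
mu = lyapunov_exponent(f, df, 0.3)
```

For a high-dimensional reservoir, the analogous quantity is usually estimated from the divergence rate of two nearby network states rather than from an explicit derivative.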
no code implementations • 13 Mar 2018 • Aditya Shukla, Sidharth Prasad, Sandip Lashkare, Udayan Ganguly
As a solution, we propose the use of multiple PCMO-RRAMs in parallel within a synapse.
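One effect of placing N devices in parallel is statistical: the effective synaptic weight is set by the combined conductance, so uncorrelated device-to-device programming variation averages out roughly as 1/√N. A hypothetical sketch of that averaging (a toy noise model, not the paper's PCMO-RRAM characterization; names are illustrative):

```python
import numpy as np

def parallel_synapse_weight(target, n_devices, sigma=0.05, rng=None):
    """Model a synapse built from n_devices parallel RRAMs, each
    programmed toward the same target conductance with Gaussian
    device-to-device variation of std `sigma`. The effective weight
    is the mean of the parallel conductances.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    g = target + rng.normal(0.0, sigma, n_devices)  # programmed values
    return g.mean()

w_single = parallel_synapse_weight(0.5, 1)      # one noisy device
w_many = parallel_synapse_weight(0.5, 10000)    # averaged, near 0.5
```

With many devices the realized weight concentrates tightly around the target, which is the variability-reduction argument for paralleling devices.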
no code implementations • 8 Sep 2017 • Aditya Shukla, Udayan Ganguly
This enables learning and recognition simultaneously on an SNN.
no code implementations • 6 Apr 2017 • Aditya Shukla, Vinay Kumar, Udayan Ganguly
Spiking Neural Networks (SNNs), being biologically inspired, lend themselves naturally to hardware implementation.
no code implementations • 7 Dec 2016 • Anmol Biswas, Sidharth Prasad, Sandip Lashkare, Udayan Ganguly
Second, we develop a computationally efficient (15,000×) and accurate (correlation of 0.98) method to evaluate the performance of the network without standard recognition tests.