no code implementations • 3 Aug 2023 • Sourjya Roy, Cheng Wang, Anand Raghunathan
We devised a cross-layer simulation framework to evaluate the effectiveness of STT-MRAM as a scratchpad replacement for SRAM in a systolic-array-based DNN accelerator.
no code implementations • 8 May 2021 • Sourjya Roy, Mustafa Ali, Anand Raghunathan
Processing-in-memory has been proposed as a promising solution to the memory-wall bottleneck for ML workloads.
no code implementations • 5 Mar 2020 • Sourjya Roy, Priyadarshini Panda, Gopalakrishnan Srinivasan, Anand Raghunathan
Our results for VGG-16 trained on CIFAR-10 show that L1 normalization provides the best performance among the techniques explored in this work, with less than a 1% drop in accuracy relative to the original network after pruning 80% of the filters.
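The L1-norm pruning criterion described in this entry can be sketched in a few lines: rank each convolutional filter by the L1 norm of its weights and keep only the strongest ones. This is a minimal NumPy illustration; the function names and the 80% default ratio are illustrative, not the paper's actual code.

```python
import numpy as np

def l1_filter_scores(weights):
    """L1 norm of each output filter in a conv weight tensor
    shaped (out_channels, in_channels, kH, kW)."""
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def prune_filters(weights, prune_ratio=0.8):
    """Keep the filters with the largest L1 norms; drop the rest.

    Returns the pruned weight tensor and the (sorted) indices of the
    filters that were kept.
    """
    scores = l1_filter_scores(weights)
    n_keep = max(1, int(round(weights.shape[0] * (1 - prune_ratio))))
    keep = np.sort(np.argsort(scores)[-n_keep:])  # strongest filters
    return weights[keep], keep
```

In practice this step would be followed by retraining (fine-tuning) the smaller network to recover accuracy, as is standard for filter-pruning pipelines.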
no code implementations • 25 Feb 2020 • Sourjya Roy, Shrihari Sridharan, Shubham Jain, Anand Raghunathan
To address this challenge, there is a need for tools that can model the functional impact of non-idealities on DNN training and inference.
1 code implementation • 23 Feb 2020 • Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
To address all the above issues, we present a simple yet effective methodology for gradual channel pruning during training, using a novel data-driven metric referred to as the feature relevance score.
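The gradual-pruning idea in this entry can be sketched as a periodic masking step during training: score channels on real data, then disable a small fraction of the lowest-scoring channels at each step. The relevance metric below (mean absolute activation per channel) is a hypothetical stand-in for the paper's feature relevance score, used only to make the sketch self-contained.

```python
import numpy as np

def feature_relevance(activations):
    """Hypothetical stand-in for the paper's data-driven feature
    relevance score: mean absolute activation per channel, for
    activations shaped (batch, channels, H, W)."""
    return np.abs(activations).mean(axis=(0, 2, 3))

def prune_step(mask, activations, fraction=0.1):
    """One gradual-pruning step: zero out the lowest-relevance
    channels among those still active in the boolean mask."""
    scores = feature_relevance(activations)
    active = np.flatnonzero(mask)
    n_drop = int(round(len(active) * fraction))
    if n_drop:
        drop = active[np.argsort(scores[active])[:n_drop]]
        mask[drop] = False
    return mask
```

Calling `prune_step` every few epochs while training continues gives the gradual schedule: the network adapts to each small pruning step before the next one is applied.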
1 code implementation • 29 Sep 2016 • Akhilesh Jaiswal, Sourjya Roy, Gopalakrishnan Srinivasan, Kaushik Roy
The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing.