no code implementations • 12 Jun 2023 • Jack Chong, Manas Gupta, Lihui Chen
We also present a full pipeline that combines EHAP with INT8 quantization-aware training (QAT) to compress the network further after pruning.
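This snippet doesn't reproduce EHAP itself, but the second stage maps naturally onto PyTorch's eager-mode quantization API. A minimal sketch of INT8 QAT applied after pruning, using a toy network as a stand-in for the paper's architecture (the model, layer sizes, and the fbgemm backend are assumptions):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy stand-in for the already-pruned network (EHAP not shown)."""
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 16, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().train()
# After pruning, attach fake-quant observers and fine-tune (INT8 QAT).
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm")
torch.ao.quantization.prepare_qat(model, inplace=True)
# ... the usual fine-tuning loop runs here so quantization params calibrate ...
model.eval()
int8_model = torch.ao.quantization.convert(model)  # swaps in INT8 kernels
```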
no code implementations • 9 Dec 2022 • Manas Gupta, Sarthak Ketanbhai Modi, Hang Zhang, Joon Hei Lee, Joo Hwee Lim
Four of the five bio-algorithms tested outperform BP by up to 5% in accuracy when only 20% of the training dataset is available.
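The five algorithms aren't named in this snippet; as a representative example of a bio-plausible local learning rule, here is a minimal Hebbian update, which uses only pre- and post-synaptic activity and no backpropagated error signal (layer sizes and learning rate are illustrative):

```python
import torch

def hebbian_step(weight, pre, post, lr=0.01):
    """Purely local update: strengthen connections whose pre- and
    post-synaptic activations co-occur. No backpropagated error."""
    # pre: (batch, n_in), post: (batch, n_out)
    return weight + lr * (post.t() @ pre) / pre.shape[0]

# Toy usage on one linear layer with random activity.
w = 0.1 * torch.randn(8, 16)
pre = torch.randn(32, 16)
post = pre @ w.t()            # the layer's forward pass
w = hebbian_step(w, pre, post)
```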
1 code implementation • 29 Sep 2022 • Manas Gupta, Efe Camci, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Chuan-Sheng Foo, Wu Min, Lin Jie
Surprisingly, we find that vanilla Global MP performs very well against the SOTA techniques.
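Global MP ranks all weights in the network jointly and removes those below a single network-wide magnitude threshold, rather than allocating a per-layer budget. A minimal sketch of that idea (the `dim() > 1` filter for prunable tensors and the 80% sparsity level are assumptions for illustration):

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model, sparsity=0.8):
    """Zero the smallest-magnitude weights under ONE threshold computed
    over all layers jointly, rather than layer by layer."""
    scores = torch.cat([p.detach().abs().flatten()
                        for p in model.parameters() if p.dim() > 1])
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() > threshold).float())

# Toy usage on a small MLP.
net = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
global_magnitude_prune(net, sparsity=0.8)
```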
1 code implementation • 24 Jan 2022 • Mahsa Paknezhad, Hamsawardhini Rengarajan, Chenghao Yuan, Sujanya Suresh, Manas Gupta, Savitha Ramasamy, Hwee Kuan Lee
Each subset consists of network segments that can be combined and shared across specific tasks.
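The snippet doesn't specify how segments are allocated or routed, so the following is only a structural sketch under assumed segment counts and task-to-segment routes; overlapping routes are what let parameters be shared across tasks:

```python
import torch
import torch.nn as nn

class SegmentedNet(nn.Module):
    """Pool of segments; each task composes (and shares) a subset of them."""
    def __init__(self, n_segments=4, dim=128):
        super().__init__()
        self.segments = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(n_segments)])
        # Assumed routing table: overlapping entries model cross-task sharing.
        self.task_routes = {0: [0, 1], 1: [1, 2], 2: [2, 3]}

    def forward(self, x, task_id):
        for idx in self.task_routes[task_id]:
            x = torch.relu(self.segments[idx](x))
        return x

out = SegmentedNet()(torch.randn(4, 128), task_id=1)
```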
1 code implementation • 29 Sep 2021 • Manas Gupta, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Efe Camci, Chuan-Sheng Foo, Jie Lin
We show that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures.
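Complementing the hand-rolled sketch above, the same one-global-threshold idea is available off-the-shelf via `torch.nn.utils.prune`; ResNet-50 and the 80% amount below are illustrative choices, not necessarily the paper's exact settings:

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet50

model = resnet50()
# Every Conv2d/Linear weight competes under ONE network-wide threshold.
params = [(m, "weight") for m in model.modules()
          if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.8)
```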
no code implementations • 9 Jul 2020 • Manas Gupta, Siddharth Aravindan, Aleksandra Kalisz, Vijay Chandrasekhar, Lin Jie
PuRL achieves more than 80% sparsity on the ResNet-50 model while retaining a Top-1 accuracy of 75.37% on the ImageNet dataset.
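PuRL's actual agent, state features, and reward shaping aren't given in this snippet, so the sketch below replaces the learned agent with an exhaustive sweep over a small discrete action space, purely to illustrate the prune-then-evaluate reward loop (a toy model and random data stand in for ResNet-50/ImageNet):

```python
import torch
import torch.nn as nn

def magnitude_prune_(layer, sparsity):
    """Apply the chosen action: zero the smallest `sparsity` fraction
    of this layer's weights, in place."""
    w = layer.weight.detach().abs().flatten()
    k = max(1, int(sparsity * w.numel()))
    thresh = w.kthvalue(k).values
    with torch.no_grad():
        layer.weight.mul_((layer.weight.abs() > thresh).float())

# Toy model and data standing in for ResNet-50 / ImageNet.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))

best_action, best_reward = None, -1.0
for sparsity in (0.5, 0.7, 0.8, 0.9):        # assumed discrete action space
    trial = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    trial.load_state_dict(model.state_dict())
    for m in trial:
        if isinstance(m, nn.Linear):
            magnitude_prune_(m, sparsity)
    reward = (trial(x).argmax(dim=1) == y).float().mean().item()  # accuracy
    if reward > best_reward:
        best_action, best_reward = sparsity, reward
```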