Search Results for author: Md Aamir Raihan

Found 2 papers, 2 papers with code

Sparse Weight Activation Training

1 code implementation · NeurIPS 2020 · Md Aamir Raihan, Tor M. Aamodt

For ResNet-50 on ImageNet, SWAT reduces total floating-point operations (FLOPS) during training by 80%, resulting in a 3.3$\times$ training speedup when run on a simulated sparse learning accelerator representative of emerging platforms, while incurring only a 1.63% reduction in validation accuracy.

Image Classification · Network Pruning +1
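
The SWAT entry above only reports results, not the procedure. Below is a minimal PyTorch-style sketch of the general idea of sparsifying both weights and activations in the forward pass via magnitude-based top-K selection; the layer, the keep fraction, and the selection rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of sparse weight/activation training (assumption: magnitude
# top-K selection per tensor; not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F


def topk_mask(x: torch.Tensor, keep_frac: float) -> torch.Tensor:
    """Return a 0/1 mask keeping the largest-magnitude `keep_frac` entries."""
    k = max(1, int(keep_frac * x.numel()))
    # k-th largest magnitude == (numel - k + 1)-th smallest magnitude
    threshold = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    return (x.abs() >= threshold).to(x.dtype)


class SparseLinear(nn.Linear):
    """Linear layer that sparsifies weights and activations in its forward pass."""

    def __init__(self, in_features: int, out_features: int, keep_frac: float = 0.2):
        super().__init__(in_features, out_features)
        self.keep_frac = keep_frac  # keep 20% of entries; the rest contribute no FLOPs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight * topk_mask(self.weight, self.keep_frac)
        x = x * topk_mask(x, self.keep_frac)
        return F.linear(x, w, self.bias)


if __name__ == "__main__":
    layer = SparseLinear(256, 128)
    out = layer(torch.randn(32, 256))
    print(out.shape)  # torch.Size([32, 128])
```

On dense hardware this masking does not itself speed anything up; the reported speedup assumes an accelerator that can skip the zeroed multiply-accumulates.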

Modeling Deep Learning Accelerator Enabled GPUs

13 code implementations · 19 Nov 2018 · Md Aamir Raihan, Negar Goli, Tor Aamodt

The efficacy of deep learning has resulted in it becoming one of the most important applications run in data centers today.

Mathematical Software · Hardware Architecture
