Search Results for author: Sanchari Sen

Found 7 papers, 2 papers with code

InterTrain: Accelerating DNN Training using Input Interpolation

no code implementations • 29 Sep 2021 • Sarada Krithivasan, Swagath Venkataramani, Sanchari Sen, Anand Raghunathan

This is because the efficacy of learning on interpolated inputs is reduced by the interference between the forward/backward propagation of their constituent inputs.
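For context on what "learning on interpolated inputs" means, here is a minimal mixup-style sketch of a training step on interpolated input pairs. This is hypothetical PyTorch code, not the authors' InterTrain implementation; the Beta-distributed mixing coefficient and the soft-label loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def interpolated_step(model, optimizer, x, y, num_classes, alpha=0.5):
    """One training step on interpolated inputs (mixup-style sketch,
    not the InterTrain algorithm itself)."""
    perm = torch.randperm(x.size(0))          # pair each input with a random partner
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1.0 - lam) * x[perm]   # interpolate the constituent inputs
    y_a = F.one_hot(y, num_classes).float()
    y_b = F.one_hot(y[perm], num_classes).float()
    y_mix = lam * y_a + (1.0 - lam) * y_b     # interpolate the labels accordingly

    optimizer.zero_grad()
    logits = model(x_mix)
    # Soft-label cross-entropy on the mixed targets.
    loss = torch.sum(-y_mix * F.log_softmax(logits, dim=1), dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```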

Specialized Transformers: Faster, Smaller and more Accurate NLP Models

no code implementations • 29 Sep 2021 • Amrit Nagarajan, Sanchari Sen, Jacob R. Stevens, Anand Raghunathan

We propose a Specialization framework to create optimized transformer models for a given downstream task.

Hard Attention, Quantization

Accelerating DNN Training through Selective Localized Learning

no code implementations • 1 Jan 2021 • Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan

The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs.
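The excerpt refers to splitting the network at a transition layer, with localized updates on one side and SGD on the other. Below is a hypothetical sketch of such a hybrid update scheme for a stack of linear layers; the toy Hebbian-like local rule, the layer types, and the fixed boundary are illustrative assumptions, not the paper's algorithm (which additionally shifts the boundary across epochs based on the weight-update trend).

```python
import torch
import torch.nn as nn

class HybridTrainer:
    """Sketch of mixing localized and SGD-based updates: layers before
    `boundary` get a simple local (Hebbian-like) update, the remaining
    layers are trained with standard backprop + SGD."""
    def __init__(self, layers, boundary, lr=0.01, local_lr=0.001):
        self.layers = layers            # list of nn.Linear modules
        self.boundary = boundary        # fixed here; shifted across epochs in the paper
        self.local_lr = local_lr
        self.opt = torch.optim.SGD(
            [p for l in layers[boundary:] for p in l.parameters()], lr=lr)

    def step(self, x, y, loss_fn):
        # Forward through the "localized" layers without building a graph.
        with torch.no_grad():
            for layer in self.layers[:self.boundary]:
                pre = x
                x = torch.relu(layer(pre))
                # Toy local rule: reinforce weights by output/input correlation.
                layer.weight += self.local_lr * (x.T @ pre) / pre.size(0)
        # Backprop only through the SGD-trained layers.
        for i, layer in enumerate(self.layers[self.boundary:]):
            x = layer(x)
            if i < len(self.layers) - self.boundary - 1:
                x = torch.relu(x)       # no activation after the final layer
        loss = loss_fn(x, y)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```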

AxFormer: Accuracy-driven Approximation of Transformers for Faster, Smaller and more Accurate NLP Models

1 code implementation • 7 Oct 2020 • Amrit Nagarajan, Sanchari Sen, Jacob R. Stevens, Anand Raghunathan

We propose AxFormer, a systematic framework that applies accuracy-driven approximations to create optimized transformer models for a given downstream task.

Hard Attention, Quantization, +1
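As an illustration of the "accuracy-driven" idea, the sketch below greedily removes encoder layers while a validation metric stays within a tolerance of the baseline. The pruning granularity, the `evaluate` callable, and the tolerance are assumptions for illustration; AxFormer's actual approximation space is broader than layer dropping.

```python
def accuracy_driven_layer_pruning(model, encoder_layers, evaluate, tolerance=0.01):
    """Greedily drop encoder layers as long as the task metric stays within
    `tolerance` of the baseline (sketch only, not the AxFormer algorithm).
    `encoder_layers` is a mutable sequence of layers used by `model`, and
    `evaluate(model)` returns a task accuracy in [0, 1]."""
    baseline = evaluate(model)
    i = len(encoder_layers) - 1
    while i >= 0 and len(encoder_layers) > 1:
        removed = encoder_layers[i]
        del encoder_layers[i]                    # tentatively drop one layer
        if evaluate(model) < baseline - tolerance:
            encoder_layers.insert(i, removed)    # accuracy dropped too far: restore it
        i -= 1
    return model
```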

Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks

no code implementations • 14 Jun 2020 • Sarada Krithivasan, Sanchari Sen, Anand Raghunathan

We also evaluate the impact of the attack on a sparsity-optimized DNN accelerator, demonstrating latency degradations of up to 1.59x, and study its performance on a sparsity-optimized general-purpose processor.

Computational Efficiency, Quantization
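The attack exploits the fact that sparsity-optimized hardware skips work on zero activations, so a perturbation that makes activations denser directly inflates energy and latency. A heavily simplified, hypothetical perturbation step in PyTorch is sketched below; the surrogate objective and the FGSM-style update are assumptions, not the paper's method.

```python
import torch

def sparsity_attack_step(model, x, relu_layers, eps=0.01):
    """Hypothetical sketch of a sparsity-reducing perturbation: nudge the
    input so more post-ReLU activations become non-zero, defeating the
    zero-skipping that sparsity-optimized hardware relies on."""
    acts = []
    hooks = [l.register_forward_hook(lambda m, i, o: acts.append(o))
             for l in relu_layers]
    x_adv = x.clone().detach().requires_grad_(True)
    model(x_adv)
    # Surrogate objective: total post-ReLU activation mass; increasing it
    # tends to turn zero activations into non-zero ones.
    obj = sum(a.sum() for a in acts)
    obj.backward()
    for h in hooks:
        h.remove()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()   # FGSM-style step on the objective
    return x_adv.detach()
```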

EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks

1 code implementation • ICLR 2020 • Sanchari Sen, Balaraman Ravindran, Anand Raghunathan

Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.

Self-Driving Cars
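A minimal sketch of the EMPIR ensemble idea, assuming the full-precision and reduced-precision member models already exist, is shown below. The probability-averaging combination rule here is an assumption for illustration; EMPIR's exact precision configurations and combination scheme may differ.

```python
import torch

class MixedPrecisionEnsemble(torch.nn.Module):
    """Ensemble mixing one full-precision model with reduced-precision
    models; adversarial noise tuned against one precision tends not to
    transfer to the others, which is the source of the robustness gain."""
    def __init__(self, full_precision_model, low_precision_models):
        super().__init__()
        self.members = torch.nn.ModuleList(
            [full_precision_model, *low_precision_models])

    @torch.no_grad()
    def forward(self, x):
        # Average the per-member class probabilities.
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs).mean(dim=0)
```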

SparCE: Sparsity aware General Purpose Core Extensions to Accelerate Deep Neural Networks

no code implementations • 7 Nov 2017 • Sanchari Sen, Shubham Jain, Swagath Venkataramani, Anand Raghunathan

SparCE consists of two key micro-architectural enhancements: a Sparsity Register File (SpRF) that tracks zero registers, and a Sparsity-aware Skip Address (SASA) table that indicates instructions to be skipped.

Attribute
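SparCE itself is a micro-architectural extension, so the sketch below is only a software analogue of the zero-skipping idea: a toy register machine that tracks which registers hold zero (standing in for the SpRF) and skips a multiply-accumulate when an operand is zero (standing in for SASA-directed skipping). All names and structures here are illustrative, not the hardware design.

```python
class ZeroSkipCore:
    """Toy software analogue of zero-skipping in a general-purpose core."""
    def __init__(self, num_regs=8):
        self.regs = [0.0] * num_regs
        self.zero_flags = [True] * num_regs   # plays the role of the Sparsity Register File

    def load(self, rd, value):
        self.regs[rd] = value
        self.zero_flags[rd] = (value == 0)

    def mac(self, rd, ra, rb):
        # Skip the multiply-accumulate when either source register is zero,
        # mimicking how SASA-tagged instruction sequences would be skipped.
        if self.zero_flags[ra] or self.zero_flags[rb]:
            return "skipped"
        self.regs[rd] += self.regs[ra] * self.regs[rb]
        self.zero_flags[rd] = (self.regs[rd] == 0)
        return "executed"
```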
