Search Results for author: Kunal Banerjee

Found 9 papers, 3 papers with code

Detecting Concept Drift in the Presence of Sparsity -- A Case Study of Automated Change Risk Assessment System

no code implementations • 27 Jul 2022 • Vishwas Choudhary, Binay Gupta, Anirban Chatterjee, Subhadip Paul, Kunal Banerjee, Vijay Agneeswaran

In this work, we carry out a systematic study of the following: (i) different patterns of missing values, (ii) various statistical and ML-based data imputation methods for different kinds of sparsity, (iii) several concept drift detection methods, (iv) practical analysis of the various drift detection metrics, and (v) selection of the best concept drift detector for a dataset with missing values, based on the different metrics.

Imputation
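
The study above pairs imputation strategies with drift detectors. As a minimal sketch of that kind of pairing (not the paper's ACRAS pipeline; the mean-imputation choice and the two-sample Kolmogorov-Smirnov detector are our illustrative picks), one can impute a sparse feature and then test whether its distribution has shifted:

```python
# Illustrative only: one imputation method (mean) combined with one drift
# detector (two-sample KS test), the kind of combination the study compares.
import numpy as np
from scipy.stats import ks_2samp

def impute_mean(x: np.ndarray) -> np.ndarray:
    """Replace NaNs with the column mean (one of many imputation choices)."""
    filled = x.copy()
    filled[np.isnan(filled)] = np.nanmean(x)
    return filled

def drifted(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the imputed samples differ significantly."""
    _, p_value = ks_2samp(impute_mean(reference), impute_mean(current))
    return p_value < alpha

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 1000)
cur = rng.normal(0.5, 1.0, 1000)          # shifted distribution
cur[rng.random(1000) < 0.3] = np.nan      # 30% sparsity
print(drifted(ref, cur))                  # True: shift detected despite missing values
```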

Exploring Alternatives to Softmax Function

1 code implementation • 23 Nov 2020 • Kunal Banerjee, Vishak Prasad C, Rishi Raj Gupta, Karthik Vyas, Anushree H, Biswajit Mishra

The softmax function is widely used in artificial neural networks for multiclass classification, multilabel classification, attention mechanisms, etc.

General Classification • Image Classification
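
One family of alternatives explored in this line of work replaces the exponential with its second-order Taylor expansion, which stays strictly positive and so still yields a valid probability distribution. A hedged sketch (function names are ours):

```python
# Standard softmax next to a second-order Taylor softmax.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())               # subtract max for numerical stability
    return e / e.sum()

def taylor_softmax(z: np.ndarray) -> np.ndarray:
    # 1 + z + z**2/2 is the 2nd-order Taylor expansion of exp(z); its
    # discriminant is negative, so it is positive for all real z.
    t = 1.0 + z + 0.5 * z**2
    return t / t.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits), taylor_softmax(logits))  # both sum to 1
```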

K-TanH: Efficient TanH For Deep Learning

no code implementations • 17 Sep 2019 • Abhisek Kundu, Alex Heinecke, Dhiraj Kalamkar, Sudarshan Srinivasan, Eric C. Qin, Naveen K. Mellempudi, Dipankar Das, Kunal Banerjee, Bharat Kaul, Pradeep Dubey

We propose K-TanH, a novel, highly accurate, hardware-efficient approximation of the popular TanH activation function for deep learning.

Translation
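
K-TanH itself works on integer bit fields of low-precision values, which is beyond a short snippet; as a stand-in, the sketch below uses a classic Padé approximant of tanh to show how such cheap approximations are validated against the exact function. The approximant and the error check are our illustration, not the paper's algorithm:

```python
# Not K-TanH: a Pade-approximant tanh used to illustrate accuracy checking.
import numpy as np

def pade_tanh(x: np.ndarray) -> np.ndarray:
    """tanh(x) ~= x*(27 + x^2) / (27 + 9*x^2), clipped to [-1, 1]."""
    return np.clip(x * (27.0 + x * x) / (27.0 + 9.0 * x * x), -1.0, 1.0)

x = np.linspace(-4.0, 4.0, 10001)
max_err = np.abs(pade_tanh(x) - np.tanh(x)).max()
print(f"max abs error on [-4, 4]: {max_err:.4f}")
```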

Anatomy Of High-Performance Deep Learning Convolutions On SIMD Architectures

2 code implementations • 16 Aug 2018 • Evangelos Georganas, Sasikanth Avancha, Kunal Banerjee, Dhiraj Kalamkar, Greg Henry, Hans Pabst, Alexander Heinecke

Convolution layers are prevalent in many classes of deep neural networks, including Convolutional Neural Networks (CNNs) which provide state-of-the-art results for tasks like image recognition, neural machine translation and speech recognition.

Distributed, Parallel, and Cluster Computing
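
The starting point for high-performance direct convolution is the plain loop nest below, which SIMD implementations then block, reorder, and vectorize. This naive NCHW version (shapes and names are ours, not the paper's kernels) shows the structure being optimized:

```python
# Minimal direct convolution (NCHW layout, stride 1, no padding).
import numpy as np

def conv2d_direct(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (C_in, H, W), w: (C_out, C_in, KH, KW) -> (C_out, H-KH+1, W-KW+1)."""
    c_out, c_in, kh, kw = w.shape
    h_out, w_out = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((c_out, h_out, w_out), dtype=x.dtype)
    for k in range(c_out):            # output channels
        for c in range(c_in):         # input channels (reduction)
            for r in range(kh):
                for s in range(kw):
                    # dense multiply-add over the image: the part a
                    # SIMD kernel vectorizes and register-blocks
                    y[k] += w[k, c, r, s] * x[c, r:r + h_out, s:s + w_out]
    return y

x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv2d_direct(x, w).shape)  # (4, 6, 6)
```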

Ternary Residual Networks

no code implementations • 15 Jul 2017 • Abhisek Kundu, Kunal Banerjee, Naveen Mellempudi, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey

Aided by such an elegant trade-off between accuracy and compute, the 8-2 model (8-bit activations, ternary weights), enhanced by ternary residual edges, turns out to be sophisticated enough to achieve very high accuracy ($\sim 1\%$ drop from our FP-32 baseline), despite $\sim 1.6\times$ reduction in model size, $\sim 26\times$ reduction in number of multiplications, and potentially $\sim 2\times$ power-performance gain compared to the 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on the ImageNet dataset.
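
To make the ternary-plus-residual idea concrete, the sketch below ternarizes a weight matrix with a threshold-based scheme (in the style of ternary weight networks) and returns the quantization error that a residual edge would carry. The 0.7 threshold heuristic and the scaling are illustrative choices, not the paper's exact method:

```python
# Threshold-based ternarization plus the residual error term.
import numpy as np

def ternarize(w: np.ndarray):
    delta = 0.7 * np.abs(w).mean()            # common heuristic threshold
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    w_t = alpha * np.sign(w) * mask           # values in {-alpha, 0, +alpha}
    return w_t, w - w_t                       # ternary weights + residual error

w = np.random.randn(256, 256).astype(np.float32)
w_t, residual = ternarize(w)
print(np.unique(np.round(w_t, 4)).size)              # 3 distinct weight levels
print(np.linalg.norm(residual) / np.linalg.norm(w))  # error a residual edge recovers
```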
