no code implementations • 25 Apr 2023 • Aditya Challa, Snehanshu Saha, Soma Dhavala
We argue that, given the choice between a minimum calibration error on the original distribution that increases across distortions and a (possibly slightly higher) calibration error that remains constant across distortions, we prefer the latter. We hypothesize that deep networks are unreliable because, with the way neural networks are currently trained, the predicted probabilities do not generalize across small distortions.
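The calibration error referred to here can be measured with the standard expected calibration error (ECE): bin predictions by confidence and average the gap between mean confidence and empirical accuracy. A minimal sketch (the binning scheme and toy inputs are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average of |mean confidence - empirical accuracy| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Hypothetical usage: compute ECE on clean inputs; repeat on distorted
# inputs to see whether calibration degrades across distortions.
conf_clean = np.array([0.9, 0.8, 0.95, 0.6])
hit_clean = np.array([1.0, 1.0, 1.0, 0.0])
print(expected_calibration_error(conf_clean, hit_clean))
```

Comparing this quantity on clean versus distorted copies of a test set is one way to operationalize "calibration error which is constant across distortions".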
no code implementations • 7 Apr 2023 • Pronoma Banerjee, Manasi V Gude, Rajvi J Sampat, Sharvari M Hedaoo, Soma Dhavala, Snehanshu Saha
We introduce "ABC-GAN", a novel generative modeling framework that combines Generative Adversarial Networks (GANs) with Approximate Bayesian Computation (ABC).
no code implementations • 17 Feb 2023 • Snehanshu Saha, Jyotirmoy Sarkar, Soma Dhavala, Santonu Sarkar, Preyank Mota
In particular, we propose the Parametric Elliot Function (PEF) as an activation function (AF) inside the LSTM, which saturates later than sigmoid and tanh.
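The excerpt does not give the PEF's exact parameterization; a plausible sketch, assuming it generalizes the classic Elliot function x / (1 + |x|) with a parameter `a` (an assumption here) that controls how quickly the curve saturates:

```python
import numpy as np

def elliot(x):
    """Classic Elliot activation: sigmoid-shaped, bounded in (-1, 1)."""
    return x / (1.0 + np.abs(x))

def parametric_elliot(x, a=0.5):
    """Hypothetical Parametric Elliot Function: `a` (assumed) scales the
    denominator, so a smaller `a` pushes saturation further out
    (the output is bounded by 1/a instead of 1)."""
    return x / (1.0 + a * np.abs(x))

# At the same input, the parametric variant (a < 1) is farther from
# its asymptote than the classic Elliot or tanh, i.e. it saturates later.
x = np.linspace(-10.0, 10.0, 5)
print(parametric_elliot(x, a=0.2))
```

Later saturation keeps gradients alive for large pre-activations, which is the stated motivation for replacing sigmoid/tanh inside the LSTM gates.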
no code implementations • 8 May 2022 • Omatharv Bharat Vaidya, Rithvik Terence DSouza, Snehanshu Saha, Soma Dhavala, Swagatam Das
We introduce the Hamiltonian Monte Carlo Particle Swarm Optimizer (HMC-PSO), an optimization algorithm that reaps the benefits of both Exponentially Averaged Momentum PSO and HMC sampling.
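To make the PSO side of this concrete, here is a minimal sketch of particle swarm optimization with exponentially averaged momentum on a toy objective. The HMC sampling component of HMC-PSO is omitted, and the update rule and coefficients are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def em_pso(f, dim=2, n_particles=20, iters=200, beta=0.9, c1=1.5, c2=1.5):
    """PSO where the velocity is an exponential moving average (momentum)
    of the usual attraction toward personal and global bests."""
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    m = np.zeros_like(x)                         # exponentially averaged momentum
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        attract = c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        m = beta * m + (1.0 - beta) * attract    # EMA update of the step
        x = x + m
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Sphere function: the swarm should settle near the origin.
best, val = em_pso(lambda p: np.sum(p ** 2))
print(best, val)
```

In the full HMC-PSO, the gradient-informed proposals of Hamiltonian Monte Carlo would replace or augment the random attraction terms; this sketch only shows the momentum-averaged swarm dynamics.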
1 code implementation • 9 Feb 2021 • Anuj Tambwekar, Anirudh Maiya, Soma Dhavala, Snehanshu Saha
We quantify the uncertainty of the class probabilities in terms of prediction intervals, and develop individualized confidence scores that can be used to decide whether a prediction is reliable or not at scoring time.
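One common way to realize this idea is to draw repeated stochastic forward passes (e.g. MC dropout), form a percentile interval over the sampled class probabilities, and flag a prediction as reliable only if the interval is narrow. The stand-in "network", the interval level, and the width threshold below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(x, n_samples=200):
    """Stand-in for repeated stochastic forward passes (e.g. MC dropout):
    perturb a fixed logit with noise and map through a sigmoid."""
    logits = x + rng.normal(0.0, 0.3, size=n_samples)
    return 1.0 / (1.0 + np.exp(-logits))   # sampled class-1 probabilities

def prediction_interval(probs, alpha=0.05):
    """(1 - alpha) percentile interval over the sampled probabilities."""
    return np.quantile(probs, [alpha / 2.0, 1.0 - alpha / 2.0])

def is_reliable(probs, max_width=0.2):
    """Individualized confidence rule (illustrative): accept the
    prediction at scoring time only if its interval is narrow enough."""
    lo, hi = prediction_interval(probs)
    return (hi - lo) <= max_width

probs = stochastic_predict(2.0)
print(prediction_interval(probs), is_reliable(probs))
```

The interval width acts as the individualized confidence score: wide intervals signal inputs whose predictions should be deferred rather than trusted.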