2 code implementations • 19 May 2020 • Rohan Mohapatra, Snehanshu Saha, Carlos A. Coello Coello, Anwesh Bhattacharya, Soma S. Dhavala, Sriparna Saha
This paper introduces AdaSwarm, a novel gradient-free optimizer whose performance on neural-network training is comparable to, and in some cases better than, that of the widely adopted Adam optimizer.
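AdaSwarm is described as gradient-free, in the family of swarm-based optimizers. As a hedged illustration only (this is a minimal vanilla particle swarm optimization sketch, not the paper's actual AdaSwarm algorithm; `pso_minimize` and its parameters are hypothetical names chosen here), the following shows how such an optimizer can minimize a loss without computing gradients:

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=30, iters=200, seed=0):
    # Minimal vanilla particle swarm optimization: a gradient-free
    # search that moves particles toward their personal best and the
    # swarm's global best, rather than following a gradient.
    # Illustrative sketch only -- not the paper's AdaSwarm method.
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia / attraction weights
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Gradient-free minimization of a simple quadratic "loss" with optimum at (3, 3)
best_x, best_val = pso_minimize(lambda p: float(np.sum((p - 3.0) ** 2)))
```

Because no derivative of `f` is ever requested, the same loop applies unchanged to non-differentiable objectives, which is the usual motivation for gradient-free methods.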
no code implementations • 19 May 2020 • Snehanshu Saha, Tejas Prashanth, Suraj Aralihalli, Sumedh Basarkod, T. S. B Sudarshan, Soma S. Dhavala
We propose a theoretical framework for an adaptive learning-rate policy for the Mean Absolute Error (MAE) and Quantile loss functions, and evaluate its effectiveness on regression tasks.
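To make the idea of an adaptive learning rate for MAE concrete, here is a hedged sketch (illustrative only, not the paper's derived policy): subgradient descent on MAE for a linear model, with a step size scaled by a Lipschitz-style bound computed from the data instead of hand-tuned.

```python
import numpy as np

# Synthetic regression data (names and setup are illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)
# Illustrative adaptive step: the MAE subgradient norm is bounded by a
# data-dependent constant, so the step size adapts to the data scale.
# This is NOT the paper's exact formula, just the general flavour.
K = np.linalg.norm(X, axis=1).mean()
lr = 1.0 / K

for _ in range(500):
    r = X @ w - y
    grad = X.T @ np.sign(r) / len(y)   # subgradient of MAE for a linear model
    w -= lr * grad

mae = np.abs(X @ w - y).mean()
```

The design point being illustrated: because MAE is non-smooth, a fixed learning rate is hard to pick, and tying it to a bound on the (sub)gradient removes one hand-tuned hyperparameter.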
1 code implementation • 1 Jan 2020 • Shakkeel Ahmed, Ravi S. Mula, Soma S. Dhavala
Machine Learning and Artificial Intelligence are considered an integral part of the Fourth Industrial Revolution.