1 code implementation • 23 Jun 2023 • Shibhansh Dohare, J. Fernando Hernandez-Garcia, Parash Rahman, A. Rupam Mahmood, Richard S. Sutton
It is well known that deep-learning systems applied in a continual-learning setting may fail to remember earlier examples.
1 code implementation • 13 Feb 2023 • Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu
Tomorrow's robots will need to distinguish useful information from noise when performing different tasks.
1 code implementation • 13 Aug 2021 • Shibhansh Dohare, Richard S. Sutton, A. Rupam Mahmood
The Backprop algorithm for learning in neural networks relies on two mechanisms: stochastic gradient descent and initialization with small random weights, where the latter is essential to the effectiveness of the former.
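The two mechanisms can be illustrated in a minimal sketch. The toy regression task, the learning rate, and all variable names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Hedged sketch of Backprop's two mechanisms on a single linear unit:
# (1) initialization with small random weights, (2) stochastic gradient
# descent applied one example at a time.

rng = np.random.default_rng(0)

# Mechanism 1: initialize with small random weights.
w = rng.normal(0.0, 0.1, size=3)

# Toy data: targets generated by a fixed "true" weight vector (assumed).
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ w_true

# Mechanism 2: stochastic gradient descent on squared error.
lr = 0.05
for x_i, y_i in zip(X, y):
    err = x_i @ w - y_i      # prediction error on this example
    w -= lr * err * x_i      # gradient step for this single example

# After one pass, the weights approach the generating weights.
```

Starting from small random weights matters here: the early gradient steps are well scaled, and the weights move smoothly toward the solution rather than starting from a saturated or degenerate point.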
no code implementations • 18 Nov 2019 • Craig Sherstan, Shibhansh Dohare, James Macglashan, Johannes Günther, Patrick M. Pilarski
By using the timescale as one of the estimator's inputs, we can estimate value for arbitrary timescales.
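The idea of feeding the timescale to the estimator can be sketched with a single linear TD(0) learner. This is not the paper's architecture; the one-state MDP with constant reward 1 (true value 1/(1-gamma)), the feature map, and the learning rate are all assumptions for illustration:

```python
import numpy as np

# Hedged sketch: one set of weights serves multiple timescales because
# the discount gamma is itself an input feature of the value estimator.

def features(gamma):
    # The timescale enters the estimator as part of its input.
    return np.array([1.0, gamma])

gammas = [0.0, 0.5]   # timescales sampled during training (assumed)
w = np.zeros(2)
alpha = 0.5

for _ in range(2000):
    for gamma in gammas:
        f = features(gamma)
        v = w @ f                    # current estimate v(s, gamma)
        delta = 1.0 + gamma * v - v  # TD error; reward is always 1
        w += alpha * delta * f       # TD(0) update on the shared weights

# The single estimator now recovers 1 / (1 - gamma) for each trained
# timescale, and can be queried at any gamma via features(gamma).
```

With these two features the fit is exact only at the trained timescales; a richer feature map (or a network) would be needed to generalize accurately across a continuous range of gammas.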
1 code implementation • ACL 2018 • Shibhansh Dohare, Vivek Gupta, Harish Karnick
Automatic abstractive summary generation remains a significant open problem for natural language processing.
no code implementations • 9 Jul 2017 • Siddhartha Saxena, Shibhansh Dohare, Jaivardhan Kapoor
Variational inference methods often focus on the problem of efficient model optimization, with little emphasis on the choice of the approximating posterior.
2 code implementations • 6 Jun 2017 • Shibhansh Dohare, Harish Karnick, Vivek Gupta
With the ever-increasing amount of text on the Internet, automatic summary generation remains an important problem for natural language understanding.