Search Results for author: Alice Schoenauer-Sebag

Found 2 papers, 1 paper with code

Stochastic Gradient Descent: Going As Fast As Possible But Not Faster

no code implementations • 5 Sep 2017 • Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag

When applied to training deep neural networks, stochastic gradient descent (SGD) often incurs steady progression phases, interrupted by catastrophic episodes in which loss and gradient norm explode.

Change Point Detection
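
For context on the "Change Point Detection" tag above: one way to catch the catastrophic episodes the abstract describes is to run a change-point detector on the per-step training loss and back off the learning rate when an upward drift is flagged. The sketch below uses a Page-Hinkley-style test; the class name `PageHinkley`, the parameters `delta` and `lambda_`, and the halve-on-alarm policy are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

class PageHinkley:
    """Page-Hinkley change-point detector for an upward drift in a stream.

    Flags a change when the cumulative deviation of the signal above its
    running mean exceeds the threshold lambda_. delta is a tolerance that
    makes the detector insensitive to small fluctuations. Both constants
    are illustrative, not values from the paper.
    """

    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta = delta
        self.lambda_ = lambda_
        self.reset()

    def reset(self):
        self.t = 0          # number of observations seen
        self.mean = 0.0     # running mean of the signal
        self.cum = 0.0      # cumulative deviation m_t
        self.cum_min = 0.0  # running minimum M_t of m_t

    def update(self, x):
        """Feed one observation; return True if a change point is detected."""
        self.t += 1
        self.mean += (x - self.mean) / self.t
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lambda_


# Toy usage: a loss stream that decreases steadily, then "explodes" at
# step 150 (simulated data, only meant to exercise the detector).
rng = np.random.default_rng(0)
losses = np.concatenate([1.0 / np.sqrt(np.arange(1, 151)),
                         np.linspace(1.0, 20.0, 50)])
losses += 0.01 * rng.standard_normal(losses.size)

detector = PageHinkley(delta=0.01, lambda_=5.0)
lr = 0.1
for step, loss in enumerate(losses):
    if detector.update(loss):
        lr *= 0.5  # back off: the run was going "faster than possible"
        detector.reset()
        print(f"step {step}: explosion detected, lr cut to {lr:.4f}")
```

In a real training loop the same detector would be fed the minibatch loss (or gradient norm) at each SGD step; the point of the title's "as fast as possible but not faster" is that the learning rate can stay aggressive until the detector fires.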
