no code implementations • 4 Oct 2023 • Leonid Berlyand, Etienne Sandier, Yitzchak Shmalo, Lei Zhang
We explore the applications of random matrix theory (RMT) in the training of deep neural networks (DNNs), focusing on layer pruning, that is, reducing the number of DNN parameters (weights).
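The abstract does not spell out the pruning criterion; a common RMT recipe in this line of work is to keep only those singular values of a weight matrix that exceed the Marchenko-Pastur bulk edge of a pure-noise matrix. The NumPy sketch below illustrates that idea only; the function name `mp_prune` and the noise scale `sigma` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def mp_prune(W, sigma):
    """Zero out singular values of W below the Marchenko-Pastur
    bulk edge: for an n x m matrix with i.i.d. noise entries of
    variance sigma**2, the largest noise singular value concentrates
    near sigma * (sqrt(n) + sqrt(m)), so components below that edge
    are statistically indistinguishable from noise."""
    n, m = W.shape
    edge = sigma * (np.sqrt(n) + np.sqrt(m))
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    S = np.where(S > edge, S, 0.0)  # keep only above-edge "signal" modes
    return U @ (S[:, None] * Vt)

# Example: a rank-5 signal buried in Gaussian noise.
rng = np.random.default_rng(0)
n, m, sigma = 200, 100, 0.1
signal = rng.normal(size=(n, 5)) @ rng.normal(size=(5, m))
W = signal + sigma * rng.normal(size=(n, m))
print(np.linalg.matrix_rank(mp_prune(W, sigma)))  # ~5 modes survive the edge
```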
no code implementations • 4 Jun 2021 • Leonid Berlyand, Robert Creese, Pierre-Emmanuel Jabin
We introduce two-scale loss functions for use in various gradient descent algorithms applied to classification problems via deep neural networks.
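The abstract does not give the functional form of the two-scale loss, so the PyTorch sketch below is only one plausible instantiation of the general idea: penalizing examples on two different scales depending on whether their per-sample loss sits above or below a crossover value. The names `two_scale_loss`, `delta`, and `eps` are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def two_scale_loss(logits, targets, delta=0.5, eps=0.05):
    """Hypothetical two-scale variant of cross-entropy: per-sample
    losses above the crossover `delta` are penalized at full scale,
    while already well-classified samples (loss below `delta`)
    contribute only at the smaller scale `eps`, so gradients
    concentrate on the poorly classified points."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    coarse = torch.clamp(ce - delta, min=0.0)  # active only when ce > delta
    fine = torch.clamp(ce, max=delta)          # saturates at delta
    return (coarse + eps * fine).mean()

# Usage inside a standard training step:
# loss = two_scale_loss(model(x), y)
# loss.backward()
```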
no code implementations • 10 Feb 2020 • Leonid Berlyand, Pierre-Emmanuel Jabin, C. Alex Safsten
Our main result consists of two novel conditions on the classifier, either of which ensures stability of training; that is, we derive tight bounds on accuracy as the loss decreases.
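The paper's conditions and bounds are not reproduced in the abstract. As a purely elementary illustration of how a small loss can force a high accuracy (not the authors' result), note that a softmax-misclassified sample has true-class probability at most 1/2, hence per-sample cross-entropy at least log 2, which yields a Markov-type bound:

```python
import numpy as np

def accuracy_lower_bound(mean_ce_loss):
    """If argmax != true class, the true-class softmax probability is
    <= 1/2, so that sample's cross-entropy is >= log(2). Averaging:
    error_rate <= mean_loss / log(2), i.e.
    accuracy >= 1 - mean_loss / log(2)."""
    return max(0.0, 1.0 - mean_ce_loss / np.log(2.0))

print(accuracy_lower_bound(0.05))  # mean CE of 0.05 forces accuracy >= ~0.93
```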