no code implementations • 4 Oct 2023 • Leonid Berlyand, Etienne Sandier, Yitzchak Shmalo, Lei Zhang
We explore the applications of random matrix theory (RMT) in the training of deep neural networks (DNNs), focusing on layer pruning, that is, reducing the number of DNN parameters (weights).
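To make the idea concrete, below is a minimal sketch of RMT-based pruning for a single weight matrix: singular values below the Marchenko-Pastur (MP) edge are treated as noise and discarded. This is a generic illustration, not the paper's method as published; it assumes i.i.d. Gaussian noise of known scale `sigma`, and the helper names (`mp_edge`, `prune_layer`) and all matrix sizes are illustrative.

```python
import numpy as np

def mp_edge(m, n, sigma=1.0):
    """Approximate largest singular value of an m x n matrix with
    i.i.d. entries of standard deviation sigma (Bai-Yin / MP edge)."""
    return sigma * (np.sqrt(m) + np.sqrt(n))

def prune_layer(W, sigma=1.0):
    """Drop singular values below the MP edge (presumed noise) and
    rebuild W from the remaining components."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    keep = s > mp_edge(*W.shape, sigma=sigma)
    return (U[:, keep] * s[keep]) @ Vt[keep, :], int(keep.sum())

# Example: a noisy 256 x 512 "layer" plus one planted low-rank signal.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, (256, 512))
W += 50 * np.outer(rng.normal(size=256), rng.normal(size=512)) / np.sqrt(256 * 512)
W_pruned, rank = prune_layer(W, sigma=1.0)
print(f"kept {rank} of {min(W.shape)} singular values")
```

On this toy input only the planted spike exceeds the MP edge, so the reconstruction keeps a single rank-one component while the Gaussian bulk is pruned away.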
1 code implementation • 15 Mar 2023 • Yitzchak Shmalo, Jonathan Jenkins, Oleksii Krupchytskyi
Recently, random matrix theory (RMT) has been applied to the overfitting problem in deep learning.
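As an illustration of the kind of diagnostic this involves, the sketch below compares the empirical eigenvalue spectrum of a weight matrix against the MP law; eigenvalues escaping the MP bulk are candidate signal, while a spectrum hugging the bulk suggests the layer carries mostly noise. This is a standard RMT check written from scratch, not the code release attached to this entry, and `mp_density` plus all dimensions are assumptions made for the example.

```python
import numpy as np

def mp_density(x, ratio, sigma=1.0):
    """Marchenko-Pastur density for eigenvalues of (1/n) X X^T, where X
    is m x n with i.i.d. entries of variance sigma^2 and ratio = m/n <= 1."""
    lam_minus = sigma**2 * (1 - np.sqrt(ratio))**2
    lam_plus = sigma**2 * (1 + np.sqrt(ratio))**2
    out = np.zeros_like(x)
    inside = (x > lam_minus) & (x < lam_plus)
    out[inside] = np.sqrt((lam_plus - x[inside]) * (x[inside] - lam_minus)) / (
        2 * np.pi * sigma**2 * ratio * x[inside])
    return out

# Stand-in for a trained 300 x 1000 weight matrix (here: pure noise).
rng = np.random.default_rng(1)
W = rng.normal(0.0, 1.0, (300, 1000))
eigs = np.linalg.eigvalsh(W @ W.T / W.shape[1])

# Fraction of eigenvalues escaping the MP bulk (signal candidates).
print("fraction above MP edge:", np.mean(eigs > (1 + np.sqrt(300 / 1000))**2))

# Crude goodness-of-fit: empirical histogram vs. MP density on bin centers.
hist, edges = np.histogram(eigs, bins=30, density=True)
centers = (edges[:-1] + edges[1:]) / 2
print("max |empirical - MP|:", np.max(np.abs(hist - mp_density(centers, 300 / 1000))))
```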
no code implementations • 16 Oct 2022 • Yitzchak Shmalo
A recent result by Berlyand, Jabin, and Safsten introduces a doubling condition on the training data, which ensures the stability of accuracy during training for DNNs that use the absolute value activation function.
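For context, the sketch below shows what a DNN with the absolute value activation looks like in code; it does not implement the doubling condition itself, and the class name and layer sizes are hypothetical choices for the example, not taken from the paper.

```python
import torch
import torch.nn as nn

class AbsNet(nn.Module):
    """Small classifier using |x| as the activation, the setting
    of the stability result discussed above."""
    def __init__(self, dim_in=2, width=32, n_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, width)
        self.fc2 = nn.Linear(width, n_classes)

    def forward(self, x):
        # Absolute value activation in place of the usual ReLU.
        return self.fc2(torch.abs(self.fc1(x)))

net = AbsNet()
x = torch.randn(8, 2)
print(net(x).shape)  # torch.Size([8, 2])
```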