Search Results for author: Iacopo Mandatelli

Found 1 paper, 0 papers with code

Faster SGD training by minibatch persistency

no code implementations · 19 Jun 2018 · Matteo Fischetti, Iacopo Mandatelli, Domenico Salvagnin

It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization.
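The technique named in the title, minibatch persistency, amounts to reusing the same minibatch for several consecutive SGD updates instead of drawing a fresh one each step. The sketch below illustrates this idea on a least-squares problem; the function name, hyperparameter names, and the choice of a `persistency` inner loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sgd_minibatch_persistency(X, y, lr=0.1, batch_size=32,
                              persistency=4, epochs=20, seed=0):
    """SGD on least squares, reusing each minibatch for `persistency`
    consecutive updates (a sketch of the idea, not the paper's code)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Shuffle once per epoch, then walk through disjoint minibatches.
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            Xb, yb = X[b], y[b]
            # Persistency: take several gradient steps on the SAME minibatch
            # before moving on, instead of one step per minibatch.
            for _ in range(persistency):
                grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(b)
                w -= lr * grad
    return w
```

Setting `persistency=1` recovers plain minibatch SGD, so the reuse count acts as an extra hyperparameter trading gradient freshness against data-loading cost.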
