no code implementations • 17 Feb 2016 • Rodolphe Jenatton, Jim Huang, Dominik Csiba, Cedric Archambeau
We consider online optimization in the 1-lookahead setting, where the objective does not decompose additively over the rounds of the online game.
no code implementations • 6 Feb 2016 • Dominik Csiba, Peter Richtárik
Minibatching is a well-studied and widely used technique in supervised learning, valued by practitioners for its ability to accelerate training through better utilization of parallel processing power and reduced stochastic variance.
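As a minimal illustration of the variance-reduction effect described above (not the paper's method), here is a plain minibatch SGD loop for least squares: averaging gradients over a batch lowers the variance of each update relative to single-example SGD. The function name and hyperparameters are illustrative choices.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=32, lr=0.05, epochs=100, seed=0):
    """Minibatch SGD for least squares.

    The gradient averaged over a minibatch is an unbiased estimate of
    the full gradient with variance reduced by roughly the batch size.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # average over the batch -> lower-variance gradient estimate
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w
```

The inner matrix products over a batch are also exactly the operations that parallel hardware executes efficiently, which is the second benefit the abstract mentions.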
no code implementations • 7 Jun 2015 • Dominik Csiba, Peter Richtárik
For convex loss functions, our complexity results match those of QUARTZ, which is a primal-dual method also allowing for arbitrary mini-batching schemes.
no code implementations • 27 Feb 2015 • Dominik Csiba, Zheng Qu, Peter Richtárik
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems.
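The core idea of adaptive coordinate sampling can be sketched for ridge regression, where the SDCA coordinate update has a closed form and dual residuals are cheap to write down. This is a simplified illustration of sampling coordinates in proportion to their dual residuals, not AdaSDCA itself (the paper's algorithm and its practical variant manage the sampling probabilities more carefully; recomputing all residuals every step, as below, is done here only for clarity).

```python
import numpy as np

def adaptive_sdca(X, y, lam=0.1, iters=2000, seed=0):
    """SDCA for ridge regression, min (1/2n)||Xw - y||^2 + (lam/2)||w||^2,
    with sampling probabilities proportional to the dual residuals."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)  # maintained as w = X.T @ alpha / (lam * n)
    sq_norms = np.einsum('ij,ij->i', X, X)
    for _ in range(iters):
        # dual residuals: how far each coordinate is from its optimality
        # condition; large residual -> more progress from updating it
        resid = np.abs(y - X @ w - alpha)
        total = resid.sum()
        p = resid / total if total > 0 else np.full(n, 1.0 / n)
        i = rng.choice(n, p=p)
        # exact coordinate maximizer for the squared loss
        delta = (y[i] - X[i] @ w - alpha[i]) / (1 + sq_norms[i] / (lam * n))
        alpha[i] += delta
        w += delta * X[i] / (lam * n)
    return w
```

Uniform sampling corresponds to replacing `p` with the constant vector `1/n`; the adaptive rule simply spends updates where the current duality gap is concentrated.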