Search Results for author: Sahar Qaadan

Found 3 papers, 0 papers with code

Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training

no code implementations · 26 Jun 2018 · Sahar Qaadan, Tobias Glasmachers

Budgeted Stochastic Gradient Descent (BSGD) is a state-of-the-art technique for training large-scale kernelized support vector machines.
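To make the idea concrete, here is a minimal sketch of kernelized SGD with merge-based budget maintenance. The merging rule below (combine the two closest support-vector centers into their coefficient-weighted mean) is a deliberate simplification for illustration; the paper's multi-merge strategy and the function and parameter names here are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def bsgd_train(X, y, budget=10, lam=0.1, gamma=1.0, epochs=2, seed=0):
    """Pegasos-style kernel SGD that keeps at most `budget` support
    vectors by merging the two SVs whose centers are closest
    (a simplified stand-in for the paper's merging criterion)."""
    rng = np.random.default_rng(seed)
    centers, alphas = [], []            # SV centers and coefficients
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            alphas = [(1 - eta * lam) * a for a in alphas]  # shrink step
            f = sum(a * rbf(c, X[i], gamma) for c, a in zip(centers, alphas))
            if y[i] * f < 1.0:          # margin violation: add a new SV
                centers.append(X[i].copy())
                alphas.append(eta * y[i])
            if len(centers) > budget:   # budget maintenance by merging
                d_best, pair = np.inf, (0, 1)
                for p in range(len(centers)):
                    for q in range(p + 1, len(centers)):
                        d = np.sum((centers[p] - centers[q]) ** 2)
                        if d < d_best:
                            d_best, pair = d, (p, q)
                p, q = pair
                w = abs(alphas[p]) + abs(alphas[q]) + 1e-12
                merged = (abs(alphas[p]) * centers[p]
                          + abs(alphas[q]) * centers[q]) / w
                merged_alpha = alphas[p] + alphas[q]
                for idx in sorted(pair, reverse=True):
                    del centers[idx], alphas[idx]
                centers.append(merged)
                alphas.append(merged_alpha)
    return centers, alphas

def predict(centers, alphas, x, gamma=1.0):
    return np.sign(sum(a * rbf(c, x, gamma) for c, a in zip(centers, alphas)))
```

The key point the paper addresses is exactly this maintenance step: each merge loses information, so how merges are selected and combined governs both accuracy and training speed.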

Dual SVM Training on a Budget

no code implementations · 26 Jun 2018 · Sahar Qaadan, Merlin Schüler, Tobias Glasmachers

We present a dual subspace ascent algorithm for support vector machine training that respects a budget constraint limiting the number of support vectors.
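For context, the unbudgeted core that such dual methods build on is coordinate ascent on the SVM dual. The sketch below is that standard routine (a bias-free dual with box constraints 0 ≤ α ≤ C); the paper's contribution, restricting ascent to a subspace so that at most a budget of support vectors stays active, is not reproduced here, and all names are illustrative.

```python
import numpy as np

def dual_coordinate_ascent(K, y, C=1.0, epochs=20, seed=0):
    """Plain dual coordinate ascent for a bias-free SVM:
    maximize sum(alpha) - 0.5 * (alpha*y)' K (alpha*y),  0 <= alpha <= C.
    (A budgeted subspace variant would additionally constrain which
    coordinates may become nonzero.)"""
    n = len(y)
    alpha = np.zeros(n)
    g = np.zeros(n)                 # g[i] = sum_j alpha[j] y[j] K[i, j]
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = 1.0 - y[i] * g[i]                   # dual gradient
            new_alpha = np.clip(alpha[i] + grad / K[i, i], 0.0, C)
            delta = new_alpha - alpha[i]
            if delta != 0.0:
                g += delta * y[i] * K[:, i]            # incremental update
                alpha[i] = new_alpha
    return alpha
```

Each coordinate step solves a one-dimensional quadratic exactly, which is why dual methods of this family converge quickly once the active support vectors are identified.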

Speeding Up Budgeted Stochastic Gradient Descent SVM Training with Precomputed Golden Section Search

no code implementations · 26 Jun 2018 · Tobias Glasmachers, Sahar Qaadan

Limiting the model size of a kernel support vector machine to a pre-defined budget is a well-established technique that allows SVM learning and prediction to scale to large-scale data.
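The title refers to golden section search, a standard derivative-free minimizer for a unimodal one-dimensional function, used in budgeted SVM training to pick the optimal merge coefficient; the paper's speedup comes from precomputing these searches. The core routine itself can be sketched as follows (names and tolerance are illustrative):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden section
    search: shrink the bracket by the golden ratio each iteration,
    reusing one interior function evaluation per step."""
    invphi = (math.sqrt(5) - 1) / 2      # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

Because the bracket shrinks by a constant factor per step, the search needs only O(log(1/tol)) function evaluations, which is what makes tabulating its results ahead of time attractive.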
