Search Results for author: Eliav Buchnik

Found 4 papers, 0 papers with code

Graph Learning with Loss-Guided Training

no code implementations · 31 May 2020 · Eliav Buchnik, Edith Cohen

Classically, ML models trained with stochastic gradient descent (SGD) are designed to minimize the average loss per example and use a distribution of training examples that remains static in the course of training.

Graph Learning
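
To make the contrast in the abstract concrete, here is a minimal sketch of loss-guided example selection for SGD on a toy least-squares problem: instead of the static uniform distribution described above, examples are drawn with probability proportional to a running estimate of their loss. This is an illustration under my own assumptions (toy model, variable names, update rule), not the authors' implementation.

```python
# Illustrative sketch: loss-guided example sampling vs. a static distribution.
# Toy least-squares model; all names and constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
loss_est = np.ones(n)              # running per-example loss estimates
lr, steps = 0.01, 5000

for _ in range(steps):
    # Sample an example with probability proportional to its estimated loss,
    # rather than from the static uniform distribution used classically.
    p = loss_est / loss_est.sum()
    i = rng.choice(n, p=p)
    err = X[i] @ w - y[i]
    w -= lr * err * X[i]           # SGD step on the squared loss of example i
    loss_est[i] = 0.5 * err ** 2 + 1e-8   # refresh this example's loss estimate

print("parameter error:", np.linalg.norm(w - w_true))
```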

LSH Microbatches for Stochastic Gradients: Value in Rearrangement

no code implementations · ICLR 2019 · Eliav Buchnik, Edith Cohen, Avinatan Hassidim, Yossi Matias

We make a principled argument for the properties of our arrangements that accelerate the training and present efficient algorithms to generate microbatches that respect the marginal distribution of training examples.
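
The sketch below shows one possible way to form microbatches of similar examples with locality-sensitive hashing (random-hyperplane SimHash); it is my own illustration under that assumption, not the paper's algorithm. Examples that land in the same hash bucket are grouped, and the microbatch order is shuffled so that every example still appears exactly once per epoch.

```python
# Illustrative sketch: grouping similar examples into microbatches via SimHash.
import numpy as np
from collections import defaultdict

def lsh_microbatches(X, num_bits=8, batch_size=32, rng=None):
    rng = rng or np.random.default_rng()
    n, d = X.shape
    planes = rng.normal(size=(d, num_bits))            # random hyperplanes
    signs = (X @ planes) > 0                           # SimHash signature bits
    keys = signs.astype(np.int64) @ (1 << np.arange(num_bits))  # pack bits

    buckets = defaultdict(list)
    for idx, key in enumerate(keys):
        buckets[int(key)].append(idx)

    microbatches = []
    for idxs in buckets.values():
        rng.shuffle(idxs)
        for s in range(0, len(idxs), batch_size):
            microbatches.append(np.array(idxs[s:s + batch_size]))
    rng.shuffle(microbatches)                          # randomize batch order
    return microbatches

X = np.random.default_rng(1).normal(size=(500, 16))
for mb in lsh_microbatches(X, batch_size=25):
    pass  # a gradient step on X[mb] would go here
```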

Self-Similar Epochs: Value in Arrangement

no code implementations · ICLR 2019 · Eliav Buchnik, Edith Cohen, Avinatan Hassidim, Yossi Matias

Optimization of machine learning models is commonly performed through stochastic gradient updates on randomly ordered training examples.
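
For reference, the common baseline described above, stochastic gradient updates over a fresh random permutation of the examples each epoch ("random reshuffling"), looks roughly like this; the toy model and names are illustrative only.

```python
# Illustrative sketch of the random-reshuffling baseline (toy least squares).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8)
w, lr = np.zeros(8), 0.05

for epoch in range(10):
    for i in rng.permutation(len(X)):   # randomly ordered training examples
        err = X[i] @ w - y[i]
        w -= lr * err * X[i]            # stochastic gradient update
```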

Bootstrapped Graph Diffusions: Exposing the Power of Nonlinearity

no code implementations · 7 Mar 2017 · Eliav Buchnik, Edith Cohen

Classic methods capture the graph structure through some underlying diffusion process that propagates through the graph edges.
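
As a point of reference for the classic methods mentioned in the abstract, here is a minimal sketch of a linear graph diffusion in the spirit of label propagation / personalized PageRank, where seed labels are repeatedly propagated through the edges. It is an illustration only, not the paper's bootstrapped nonlinear method; the graph, seeds, and parameters are assumptions.

```python
# Illustrative sketch: linear seed-label diffusion over graph edges.
import numpy as np

def diffuse_labels(adj, seed_scores, alpha=0.85, iters=50):
    """adj: dense (n, n) adjacency matrix; seed_scores: (n, k) one-hot seeds."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)            # row-stochastic transition matrix
    scores = seed_scores.astype(float).copy()
    for _ in range(iters):
        scores = alpha * (P @ scores) + (1 - alpha) * seed_scores
    return scores.argmax(axis=1)            # predicted class per node

# Tiny example: two triangles joined by one edge, one labeled seed per side.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
seeds = np.zeros((6, 2))
seeds[0, 0] = 1
seeds[5, 1] = 1
print(diffuse_labels(adj, seeds))           # e.g. [0 0 0 1 1 1]
```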
