Stochastic Optimization

282 papers with code • 12 benchmarks • 11 datasets

Stochastic Optimization is the task of optimizing an objective function by generating and using random variables. It is usually an iterative process in which randomly generated samples progressively guide the search toward the minima or maxima of the objective function. Stochastic Optimization is typically applied to non-convex problems for which deterministic methods such as linear or quadratic programming and their variants are not suitable.

Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
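
The iterative process described above can be made concrete with a short sketch. The snippet below is a generic stochastic search on a toy non-convex function, not taken from any paper on this page; the objective, perturbation scale, and iteration count are arbitrary illustrative choices.

```python
import math
import random

def objective(x):
    # A simple non-convex function with several local minima.
    return math.sin(3 * x) + 0.1 * x ** 2

def stochastic_search(x0, step=0.5, iters=5000, seed=0):
    """Keep a current solution and accept random perturbations that improve the objective."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    for _ in range(iters):
        candidate = x + rng.gauss(0.0, step)   # random proposal around the current point
        f_candidate = objective(candidate)
        if f_candidate < fx:                   # greedily keep improvements
            x, fx = candidate, f_candidate
    return x, fx

print(stochastic_search(x0=2.0))
```

Accepting only improving proposals is the simplest possible rule; methods such as simulated annealing instead accept some worsening moves in order to escape local minima.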

Most implemented papers

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

ruqizhang/csgmcmc ICLR 2020

The posteriors over neural network weights are high dimensional and multimodal.
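
The cyclical schedule at the heart of this paper repeatedly raises and decays the step size so a stochastic-gradient MCMC sampler can escape one mode and settle into another. Below is a hedged sketch of a cosine cyclical schedule driving an SGLD step on a toy double-well log-posterior; the schedule form follows the paper, but the function names, the toy gradient, and all parameter values are illustrative assumptions rather than the repository's API.

```python
import math
import random

def cyclical_stepsize(k, total_steps, num_cycles, alpha0):
    """Cosine step-size schedule that restarts at the start of every cycle."""
    steps_per_cycle = math.ceil(total_steps / num_cycles)
    pos = (k % steps_per_cycle) / steps_per_cycle          # position within the current cycle, in [0, 1)
    return alpha0 / 2 * (math.cos(math.pi * pos) + 1)

def grad_U(theta):
    # Toy negative log-posterior with two modes (double well); for illustration only.
    return theta ** 3 - theta

def cyclical_sgld(total_steps=3000, num_cycles=4, alpha0=0.05, seed=0):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for k in range(total_steps):
        alpha = cyclical_stepsize(k, total_steps, num_cycles, alpha0)
        noise = rng.gauss(0.0, math.sqrt(2 * alpha))        # Langevin noise scaled to the step size
        theta = theta - alpha * grad_U(theta) + noise
        if alpha < 0.05 * alpha0:                           # collect samples only in the low-step-size phase
            samples.append(theta)
    return samples

samples = cyclical_sgld()
print(len(samples), min(samples), max(samples))
```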

Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation

CharlieDinh/FEDL 29 Oct 2019

There is an increasing interest in a fast-growing machine learning technique called Federated Learning, in which the model training is distributed over mobile user equipments (UEs), exploiting UEs' local computation and training data.

Personalized Federated Learning with Moreau Envelopes

CharlieDinh/pFedMe NeurIPS 2020

Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data.

Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

Optimization-AI/LibAUC ICCV 2021

Our studies demonstrate that the proposed DAM method improves upon optimizing the cross-entropy loss by a large margin, and also performs better than optimizing the existing AUC square loss on these medical image classification tasks.

Convex Optimization: Algorithms and Complexity

stephenbeckr/AIMS 20 May 2014

In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms.
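
Two of the ingredients mentioned here, stochastic gradient descent and mini-batches, are easy to illustrate on a least-squares problem. The sketch below is a generic mini-batch SGD loop; the synthetic data, batch size, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch_size=16, epochs=50, seed=0):
    """Mini-batch SGD on the least-squares objective (1/2n) * ||X w - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch_size):
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)   # unbiased estimate of the full gradient
            w -= lr * grad
    return w

# Synthetic example: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)
print(np.round(minibatch_sgd(X, y), 3))
print(np.round(w_true, 3))
```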

Deep Generalized Canonical Correlation Analysis

adrianbenton/dgcca-py3 WS 2019

We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other.

Online Learning Rate Adaptation with Hypergradient Descent

gbaydin/hypergradient-descent ICLR 2018

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice.
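
The method's core idea is to apply gradient descent to the learning rate itself, using the dot product of the current and previous gradients as the hypergradient. The sketch below shows the plain-SGD variant of this rule as I understand it from the paper; the toy quadratic objective and the value of the hyper-learning rate beta are illustrative assumptions.

```python
import numpy as np

def sgd_hd(grad, theta, alpha=0.01, beta=1e-4, steps=200):
    """Plain SGD whose learning rate alpha is itself updated by a hypergradient step."""
    prev_grad = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        alpha += beta * (g @ prev_grad)   # raise alpha when successive gradients point the same way
        theta = theta - alpha * g         # ordinary gradient step with the adapted learning rate
        prev_grad = g
    return theta, alpha

# Toy quadratic objective f(theta) = 0.5 * theta^T A theta, whose gradient is A @ theta.
A = np.diag([1.0, 10.0])
theta_final, alpha_final = sgd_hd(lambda t: A @ t, theta=np.array([3.0, -2.0]))
print(theta_final, alpha_final)
```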

SpectralNet: Spectral Clustering using Deep Neural Networks

kstant0725/SpectralNet ICLR 2018

Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.

Shampoo: Preconditioned Stochastic Tensor Optimization

kazukiosawa/asdfghjkl ICML 2018

Preconditioned gradient methods are among the most general and powerful tools in optimization.
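
Shampoo itself preconditions a matrix-shaped gradient on both sides with running Kronecker-factored statistics, applying inverse fourth roots of the accumulated left and right matrices. The sketch below follows that published update rule as I understand it and is not taken from the linked repository; the epsilon initialization, learning rate, and toy objective are illustrative assumptions.

```python
import numpy as np

def inv_fourth_root(M):
    """M^(-1/4) for a symmetric positive-definite matrix, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.25) @ vecs.T

def shampoo_step(W, G, L, R, lr=0.1):
    """One Shampoo update for a matrix-shaped parameter W with gradient G."""
    L = L + G @ G.T                      # left preconditioner statistics
    R = R + G.T @ G                      # right preconditioner statistics
    W = W - lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return W, L, R

# Toy usage: minimize 0.5 * ||W - target||_F^2, whose gradient is (W - target).
m, n, eps = 4, 3, 1e-4
rng = np.random.default_rng(0)
W, target = np.zeros((m, n)), rng.normal(size=(m, n))
L, R = eps * np.eye(m), eps * np.eye(n)
for _ in range(500):
    W, L, R = shampoo_step(W, W - target, L, R)
print(np.round(np.abs(W - target).max(), 4))
```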

A PID Controller Approach for Stochastic Optimization of Deep Networks

tensorboy/PIDOptimizer CVPR 2018

We first reveal the intrinsic connection between SGD-Momentum and PID-based controllers, and then present an optimization algorithm that exploits the past, current, and change of gradients to update the network parameters.
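
Read as a controller, the proportional term is the current gradient, the integral term is the accumulated past gradients, and the derivative term is the change of the gradient. The snippet below is a generic PID-style update written directly from that description, not the exact algorithm or gain settings of the paper; the toy objective and the Kp/Ki/Kd values are illustrative assumptions.

```python
import numpy as np

def pid_optimize(grad, theta, kp=0.1, ki=0.01, kd=0.05, steps=300):
    """Generic PID-style update: P = current gradient, I = accumulated gradients, D = gradient change."""
    integral = np.zeros_like(theta)
    prev_grad = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        integral += g                                        # I term: running sum of past gradients
        derivative = g - prev_grad                           # D term: change of the gradient
        theta = theta - (kp * g + ki * integral + kd * derivative)
        prev_grad = g
    return theta

# Toy quadratic: f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
print(pid_optimize(lambda t: t, theta=np.array([5.0, -3.0])))
```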