Stochastic Optimization
278 papers with code • 12 benchmarks • 11 datasets
Stochastic Optimization is the task of optimizing an objective function by generating and using random variables. It is usually an iterative process in which randomly generated samples progressively locate the minima or maxima of the objective. Stochastic optimization is typically applied to non-convex objectives, where deterministic methods such as linear or quadratic programming and their variants cannot be used.
Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Technique for Continuous Variables
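As a concrete illustration of the definition above, here is a minimal sketch of one classical stochastic optimizer, simulated annealing, which proposes random perturbations and occasionally accepts worse points to escape local minima. The function name, objective, and parameters below are illustrative and not taken from the cited source.

```python
import math
import random

def simulated_annealing(f, x0, n_iters=10_000, step=0.5, temp0=1.0):
    """Minimize f by random perturbation: always accept improvements,
    and accept worse moves with a temperature-controlled probability."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for t in range(1, n_iters + 1):
        temp = temp0 / t                    # cooling schedule
        cand = x + random.gauss(0.0, step)  # stochastic proposal
        f_cand = f(cand)
        if f_cand < fx or random.random() < math.exp(-(f_cand - fx) / temp):
            x, fx = cand, f_cand
            if fx < best_fx:
                best_x, best_fx = x, fx
    return best_x, best_fx

# Non-convex toy objective with many local minima.
x_star, f_star = simulated_annealing(lambda x: x**2 + 10 * math.sin(3 * x), x0=5.0)
print(x_star, f_star)
```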
Libraries
Use these libraries to find Stochastic Optimization models and implementations.
Latest papers
Coupled generator decomposition for fusion of electro- and magnetoencephalography data
Leveraging data from a multisubject, multimodal electro- and magnetoencephalography (EEG and MEG) neuroimaging experiment, we demonstrate the efficacy of the framework in identifying common features in response to face perception stimuli, while accommodating modality- and subject-specific variability.
Diffusion Stochastic Optimization for Min-Max Problems
The optimistic gradient method is useful in addressing minimax optimization problems.
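For readers unfamiliar with the method this paper builds on, below is a minimal sketch of the optimistic gradient update on a toy bilinear saddle problem; the step size, iteration count, and problem are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def ogda(F, z0, eta=0.1, n_iters=2000):
    """Optimistic gradient step: z_{t+1} = z_t - eta * (2*F(z_t) - F(z_{t-1})),
    where F is the (descent, ascent) gradient field of the min-max problem."""
    z = np.asarray(z0, dtype=float)
    F_prev = F(z)
    for _ in range(n_iters):
        F_cur = F(z)
        z = z - eta * (2.0 * F_cur - F_prev)  # extrapolate with the past gradient
        F_prev = F_cur
    return z

# Bilinear saddle problem min_x max_y x*y; the unique saddle point is (0, 0).
# The simultaneous-play field is (df/dx, -df/dy) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
print(ogda(F, z0=[1.0, 1.0]))  # approaches [0, 0]; plain descent-ascent spirals outward
```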
Stochastic optimization with arbitrary recurrent data sampling
To obtain an optimal first-order convergence guarantee for stochastic optimization, it is necessary to use a recurrent data sampling algorithm that samples every data point with sufficient frequency.
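One simple sampler that satisfies such a recurrence condition is random reshuffling, which visits every data point once per epoch. The sketch below is a generic illustration under that assumption, not the paper's algorithm.

```python
import numpy as np

def sgd_random_reshuffling(grad_i, n_data, w0, lr=0.01, epochs=50, seed=0):
    """SGD with a recurrent sampler: each epoch is a random permutation,
    so every index recurs with a bounded gap between visits."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(n_data):  # visit every data point each epoch
            w = w - lr * grad_i(w, i)
    return w

# Toy least squares: minimize mean_i (w - a_i)^2; the minimizer is mean(a) = 2.5.
a = np.array([1.0, 2.0, 3.0, 4.0])
grad_i = lambda w, i: 2.0 * (w - a[i])
print(sgd_random_reshuffling(grad_i, len(a), w0=0.0))  # ≈ 2.5
```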
Enhancing Trade-offs in Privacy, Utility, and Computational Efficiency through MUltistage Sampling Technique (MUST)
We also prove that MUST.WO is equivalent to sampling with replacement in PA.
f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization
While numerous constraints and regularization terms have been proposed in the literature to promote fairness in machine learning tasks, most of these methods are not amenable to stochastic optimization due to the complex and nonlinear structure of constraints and regularizers.
Learning From Scenarios for Stochastic Repairable Scheduling
We are interested in a stochastic scheduling problem in which processing times are uncertain, which introduces uncertainty into the constraints, so an initial schedule may need repair.
Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems
We consider stochastic optimization problems with heavy-tailed noise with structured density.
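A standard baseline in the heavy-tailed setting is gradient clipping; the sketch below shows clipped SGD on a toy quadratic with Student-t gradient noise. This is a generic illustration, not the method proposed in the paper.

```python
import numpy as np

def clipped_sgd(stoch_grad, w0, lr=0.05, clip=1.0, n_iters=2000, seed=0):
    """SGD with norm clipping: cap each stochastic gradient at norm `clip`
    so that rare heavy-tailed gradient samples cannot derail the iterates."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iters):
        g = stoch_grad(w, rng)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)  # rescale overly large gradient estimates
        w = w - lr * g
    return w

# Quadratic f(w) = ||w||^2 / 2 with heavy-tailed (Student-t, df=2) gradient noise.
stoch_grad = lambda w, rng: w + rng.standard_t(df=2, size=w.shape)
print(clipped_sgd(stoch_grad, w0=np.array([5.0, -5.0])))  # near [0, 0], up to noise
```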
The Acquisition of Physical Knowledge in Generative Neural Networks
As children grow older, they develop an intuitive understanding of the physical processes around them.
AdaSub: Stochastic Optimization Using Second-Order Information in Low-Dimensional Subspaces
We introduce AdaSub, a stochastic optimization algorithm that computes a search direction based on second-order information in a low-dimensional subspace that is defined adaptively based on available current and past information.
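AdaSub defines its subspace adaptively from current and past information; the toy sketch below instead uses a random subspace, purely to illustrate the mechanics of solving a small projected Newton system. All names and parameters here are illustrative.

```python
import numpy as np

def subspace_newton(grad, hess, w0, k=2, n_iters=50, seed=0):
    """Second-order step restricted to a k-dimensional subspace:
    solve (V^T H V) s = V^T g and update w <- w - V s."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    d = w.size
    for _ in range(n_iters):
        V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal basis
        g_s = V.T @ grad(w)            # gradient restricted to the subspace
        H_s = V.T @ hess(w) @ V        # k x k projected Hessian
        s = np.linalg.solve(H_s + 1e-6 * np.eye(k), g_s)  # small, cheap Newton system
        w = w - V @ s
    return w

# Ill-conditioned quadratic f(w) = 0.5 * w^T A w; the minimizer is the origin.
A = np.diag([1.0, 10.0, 100.0, 1000.0])
print(subspace_newton(lambda w: A @ w, lambda w: A, w0=np.ones(4)))  # approaches 0
```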
Why Do We Need Weight Decay in Modern Deep Learning?
In this work, we highlight that the role of weight decay in modern deep learning is different from its regularization effect studied in classical learning theory.
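As a reference point for this discussion, here is the decoupled (AdamW-style) form of weight decay, in which the decay is applied directly to the weights rather than folded into the gradient as an L2 penalty. The hyperparameters below are illustrative defaults.

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW step: the adaptive gradient update and the weight decay
    are applied separately (decoupled), unlike classical L2 regularization."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)   # bias-corrected first moment
    v_hat = v / (1 - beta2**t)   # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive gradient step
    w = w - lr * weight_decay * w                # decoupled decay, not an L2 gradient
    return w, m, v

# A few steps on f(w) = ||w||^2: w shrinks steadily toward the origin.
w, m, v = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 101):
    w, m, v = adamw_step(w, 2 * w, m, v, t)
print(w)
```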