Stochastic Optimization
280 papers with code • 12 benchmarks • 11 datasets
Stochastic Optimization is the task of optimizing an objective function by generating and using random variables. It is usually an iterative process in which randomly generated iterates progressively approach a minimum or maximum of the objective function. Stochastic optimization is typically applied to non-convex problems where deterministic methods such as linear or quadratic programming and their variants cannot be used.
Source: ASOC: An Adaptive Parameter-free Stochastic Optimization Techinique for Continuous Variables
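The iterative process described above can be sketched with the simplest stochastic optimizer: gradient descent driven by noisy gradient estimates. This is a minimal illustration, not code from any of the papers listed below; the objective f(x) = x² and the noise model are chosen purely for demonstration.

```python
import random

def noisy_grad(x):
    # stochastic estimate of the gradient of f(x) = x**2:
    # true gradient 2*x plus zero-mean Gaussian noise
    return 2 * x + random.gauss(0, 0.1)

def sgd(x0, lr=0.1, steps=500, seed=0):
    # iteratively follow noisy gradient estimates toward the minimizer
    random.seed(seed)
    x = x0
    for _ in range(steps):
        x -= lr * noisy_grad(x)
    return x

x_min = sgd(5.0)  # approaches the true minimizer x = 0 despite the noise
```

Each iterate is a random variable, yet with a suitable step size the sequence concentrates around the minimizer, which is the basic mechanism behind the methods surveyed on this page.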
Libraries
Use these libraries to find Stochastic Optimization models and implementations.
Latest papers with no code
Advancing Forest Fire Prevention: Deep Reinforcement Learning for Effective Firebreak Placement
To the best of our knowledge, this study represents a pioneering effort in using Reinforcement Learning to address the aforementioned problem, offering promising perspectives in fire prevention and landscape management.
Decision Transformer for Wireless Communications: A New Paradigm of Resource Management
By leveraging the power of DT models learned over extensive datasets, the proposed architecture is expected to achieve rapid convergence with far fewer training epochs and higher performance in a new context, e.g., similar tasks with different state and action spaces, compared with DRL.
Transformer-based Stagewise Decomposition for Large-Scale Multistage Stochastic Optimization
Solving large-scale multistage stochastic programming (MSP) problems poses a significant challenge as commonly used stagewise decomposition algorithms, including stochastic dual dynamic programming (SDDP), face growing time complexity as the subproblem size and problem count increase.
Accelerated Parameter-Free Stochastic Optimization
We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters.
Beyond Suspension: A Two-phase Methodology for Concluding Sports Leagues
Methodology: We propose a data-driven model which exploits predictive and prescriptive analytics to produce a schedule for the remainder of the season comprising a subset of originally scheduled games.
Taming the Interacting Particle Langevin Algorithm -- the superlinear case
Recent advances in stochastic optimization have yielded the interacting particle Langevin algorithm (IPLA), which leverages the notion of interacting particle systems (IPS) to efficiently sample from approximate posterior densities.
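The Langevin-based sampling that IPLA builds on can be illustrated with the basic unadjusted Langevin algorithm (ULA): a gradient step on the negative log-density plus injected Gaussian noise. This is a generic sketch of ULA for a standard normal target, not the IPLA method from the paper above; the target U(x) = x²/2 is an assumption made for illustration.

```python
import math
import random

def ula_sample(grad_U, x0=0.0, step=0.01, n=20000, seed=0):
    # Unadjusted Langevin algorithm targeting density proportional to exp(-U(x)):
    #   x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,  xi_k ~ N(0, 1)
    random.seed(seed)
    x, samples = x0, []
    for _ in range(n):
        x += -step * grad_U(x) + math.sqrt(2 * step) * random.gauss(0, 1)
        samples.append(x)
    return samples

# target: standard normal, so U(x) = x**2 / 2 and grad_U(x) = x
samples = ula_sample(lambda x: x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
```

The empirical mean and variance of the chain approximate those of the target (0 and 1 here); interacting-particle variants run many such chains that influence one another.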
Differentially Private Distributed Nonconvex Stochastic Optimization with Quantized Communications
This paper proposes a new distributed nonconvex stochastic optimization algorithm that can achieve privacy protection, communication efficiency and convergence simultaneously.
DASA: Delay-Adaptive Multi-Agent Stochastic Approximation
We consider a setting in which $N$ agents aim to speed up a common Stochastic Approximation (SA) problem by acting in parallel and communicating with a central server.
Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis
Our main result is a non-asymptotic guarantee for VRPG algorithm.
A learning-based solution approach to the application placement problem in mobile edge computing under uncertainty
Then, based on each user's distance features from the available servers and their request rates, machine learning models generate the first-stage decision variables of the stochastic optimization model, namely the user-to-server request allocation, and are employed as independent decision agents that reliably mimic the optimization model.