no code implementations • 4 Jun 2022 • Takashi Goda, Wataru Kitade
We study stochastic gradient descent for solving conditional stochastic optimization problems, in which the objective to be minimized is given by a parametric nested expectation, with an outer expectation taken with respect to one random variable and an inner conditional expectation taken with respect to the other.
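For reference, such a nested objective is typically written in the following generic form, where $X$ is the outer random variable, $Y \mid X$ the inner one, and $f$ and $g$ are given functions; this is the standard notation for this problem class, not a quotation from the paper:

```latex
\min_{\theta \in \mathbb{R}^d} \; F(\theta)
  := \mathbb{E}_{X}\!\left[\, f\!\left(\, \mathbb{E}_{Y \mid X}\!\left[\, g(\theta; X, Y) \right] \right) \right]
```

The difficulty is that the inner conditional expectation sits inside the (possibly nonlinear) function $f$, so a plain one-sample Monte Carlo estimate of the gradient is biased unless $f$ is affine.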
1 code implementation • 18 May 2020 • Takashi Goda, Tomohiko Hironaka, Wataru Kitade, Adam Foster
In this paper, applying the idea of randomized multilevel Monte Carlo (MLMC) methods, we introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain; the estimator has finite expected squared $\ell_2$-norm and finite expected computational cost per sample.
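The randomization trick behind such unbiased MLMC estimators can be sketched in a few lines. Below is a minimal single-term estimator in the Rhee–Glynn style: a random level $L$ is drawn with geometric probabilities and the level-$L$ correction is reweighted by $1/\mathbb{P}(L)$. The toy target (the square of a mean, not the expected-information-gain gradient from the paper) and all function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def randomized_mlmc_estimate(delta, r, rng):
    """Single-term randomized MLMC estimator (Rhee--Glynn style).

    `delta(level, rng)` must return a sample whose mean is the level
    correction E[Y_level - Y_{level-1}] (with Y_{-1} := 0), so that the
    corrections telescope to the target quantity.  Drawing a random
    level L with P(L = l) = (1 - r) * r**l and reweighting by 1 / P(L)
    makes a single draw unbiased for the whole telescoping sum.
    """
    level = rng.geometric(1.0 - r) - 1      # L in {0, 1, 2, ...}
    p_level = (1.0 - r) * r**level          # P(L = level)
    return delta(level, rng) / p_level

def delta(level, rng):
    # Toy corrections for theta = (E[Z])**2 = 1 with Z ~ N(1, 1):
    # Y_l squares the mean of 2**l samples; splitting the same samples
    # in half gives an antithetically coupled coarse estimate, so the
    # corrections decay fast enough for finite variance.
    z = rng.normal(1.0, 1.0, size=2**level)
    fine = z.mean() ** 2
    if level == 0:
        return fine                          # Y_0 itself (Y_{-1} := 0)
    half = 2 ** (level - 1)
    coarse = 0.5 * (z[:half].mean() ** 2 + z[half:].mean() ** 2)
    return fine - coarse

rng = np.random.default_rng(0)
# Average i.i.d. single-term estimates; r between 1/4 and 1/2 balances
# finite expected cost against finite variance for this toy problem.
draws = [randomized_mlmc_estimate(delta, r=2**-1.5, rng=rng)
         for _ in range(50_000)]
print(np.mean(draws))   # approximately 1.0, the true value of theta
```

Choosing the geometric ratio between the variance-decay and cost-growth rates of the corrections (here $r = 2^{-3/2}$) keeps both the expected cost per draw and the variance of each draw finite, which mirrors the two properties the abstract claims for the EIG gradient estimator.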