no code implementations • 4 Jun 2022 • Takashi Goda, Wataru Kitade
We study stochastic gradient descent for solving conditional stochastic optimization problems, in which the objective to be minimized is a parametric nested expectation: an outer expectation taken with respect to one random variable wraps an inner conditional expectation taken with respect to another.
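A minimal sketch of the plug-in scheme this setting suggests, on a hypothetical toy problem (the model, step sizes, and inner sample count m below are illustrative assumptions, not the paper's setup): the inner conditional expectation is replaced by an m-sample average inside each stochastic gradient.

import numpy as np

# Toy nested objective: F(x) = E_xi[(x - E[eta | xi])^2], with xi ~ N(0, 1)
# and eta | xi ~ N(xi, 1), so the true minimizer is x = E[xi] = 0.
rng = np.random.default_rng(0)

def grad_estimate(x, m):
    xi = rng.standard_normal()            # outer sample
    eta = xi + rng.standard_normal(m)     # m inner samples from eta | xi
    inner_mean = eta.mean()               # plug-in for E[eta | xi]
    return 2.0 * (x - inner_mean)         # d/dx of (x - inner_mean)^2

x = 5.0
for t in range(5000):                     # Robbins-Monro step sizes
    x -= grad_estimate(x, m=16) / (t + 10)
print(f"estimated minimizer: {x:.3f} (true minimizer: 0)")

For this quadratic toy the plug-in gradient happens to be unbiased; for a nonlinear outer function it carries an O(1/m) bias, which is the central difficulty such an analysis has to quantify.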
1 code implementation • 18 May 2020 • Takashi Goda, Tomohiko Hironaka, Wataru Kitade, Adam Foster
In this paper, applying the idea of randomized multilevel Monte Carlo (MLMC) methods, we introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain that has finite expected squared $\ell_2$-norm and finite expected computational cost per sample.
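A minimal sketch of the single-term randomized MLMC construction the abstract refers to, on a hypothetical toy target (g(E[X]) with g(u) = u^2 and X ~ N(1, 1), true value 1): a random level L is drawn with probability p_L and the level-L antithetic correction is divided by p_L, which removes the bias while keeping both the variance and the expected cost per sample finite.

import numpy as np

rng = np.random.default_rng(1)
g = lambda u: u ** 2

def level_difference(l):
    # level l uses 2**l samples; the antithetic coupling compares the
    # full-sample mean against the means of its two halves
    n = 2 ** l
    x = 1.0 + rng.standard_normal(n)
    if l == 0:
        return g(x.mean())
    return g(x.mean()) - 0.5 * (g(x[: n // 2].mean()) + g(x[n // 2:].mean()))

def single_term_estimate(max_level=20, r=1.5):
    # p_l ~ 2**(-1.5 l) balances variance decay against sampling cost;
    # truncating at max_level leaves only a negligible 2**(-max_level) bias
    p = 2.0 ** (-r * np.arange(max_level + 1))
    p /= p.sum()
    l = rng.choice(max_level + 1, p=p)
    return level_difference(l) / p[l]

est = np.mean([single_term_estimate() for _ in range(100_000)])
print(f"randomized MLMC estimate: {est:.3f} (true value: 1.0)")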
1 code implementation • 9 Mar 2020 • Josef Dick, Takashi Goda, Hiroya Murata
Motivated mainly by applications to partial differential equations with random coefficients, we introduce a new class of Monte Carlo estimators, called the Toeplitz Monte Carlo (TMC) estimator, for approximating the integral of a multivariate function with respect to the product of identical copies of a univariate probability measure.
Numerical Analysis • Methodology
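A minimal sketch of the Toeplitz Monte Carlo construction (the integrand f is a hypothetical stand-in): instead of drawing n independent s-dimensional points (n*s draws), TMC draws n+s-1 i.i.d. univariate samples and reuses them through overlapping length-s windows. Each window is still an exact draw from the s-fold product measure, so the estimator remains unbiased; successive points are merely dependent.

import numpy as np

rng = np.random.default_rng(2)

def tmc_estimate(f, n, s):
    u = rng.standard_normal(n + s - 1)    # n+s-1 draws instead of n*s
    # row i is the window (u_i, ..., u_{i+s-1}); adjacent rows share s-1 entries
    points = np.lib.stride_tricks.sliding_window_view(u, s)
    return f(points).mean()

# example integrand: E[cos(sum(x)/sqrt(s))] = exp(-1/2) ~= 0.607 for x ~ N(0, I_s)
f = lambda x: np.cos(x.sum(axis=1) / np.sqrt(x.shape[1]))
print(f"TMC estimate: {tmc_estimate(f, n=100_000, s=16):.3f} (true value: 0.607)")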
no code implementations • 14 Jan 2020 • Kei Ishikawa, Takashi Goda
In this paper, we propose a new stochastic optimization algorithm for Bayesian inference based on multilevel Monte Carlo (MLMC) methods.
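A minimal sketch, under illustrative assumptions, of how an MLMC gradient estimator can be plugged into a stochastic optimization loop (the toy model, level counts, and step sizes below are not the paper's): the plug-in gradient with 2^l inner samples defines level l, and a telescoping sum over antithetic level differences trades many cheap low-level samples against a few expensive high-level ones, driving the plug-in bias down to O(2^{-L}).

import numpy as np

# Toy nested objective: F(x) = E_xi[(x - E[eta | xi]**2)**2] with xi ~ N(0, 1)
# and eta | xi ~ N(xi, 1); the true minimizer is x = E[xi**2] = 1, and a naive
# plug-in gradient with m inner samples is biased by O(1/m).
rng = np.random.default_rng(3)

def grad_level(x, l):
    n = 2 ** l
    xi = rng.standard_normal()
    eta = xi + rng.standard_normal(n)
    fine = 2.0 * (x - eta.mean() ** 2)
    if l == 0:
        return fine
    a, b = eta[: n // 2].mean(), eta[n // 2:].mean()
    return fine - 0.5 * (2.0 * (x - a ** 2) + 2.0 * (x - b ** 2))

def mlmc_grad(x, L=6, N0=64):
    # telescoping sum: N_l samples of the level-l difference, halved per level
    return sum(
        np.mean([grad_level(x, l) for _ in range(max(N0 >> l, 1))])
        for l in range(L + 1)
    )

x = 3.0
for t in range(1500):                     # Robbins-Monro step sizes
    x -= mlmc_grad(x) / (t + 20)
print(f"estimated minimizer: {x:.3f} (true minimizer: 1)")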
no code implementations • 23 Dec 2019 • Takashi Goda, Kei Ishikawa
In this short note we provide an unbiased multilevel Monte Carlo estimator of the log marginal likelihood and discuss its application to variational Bayes.
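The same single-term randomized construction sketched above, specialised to the log marginal likelihood log E_theta[p(y | theta)]; the conjugate toy model (theta ~ N(0, 1) prior, y | theta ~ N(theta, 1)) is an assumption chosen so the exact answer is available for comparison.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = 0.5                                    # a single observation

def level_difference(l):
    n = 2 ** l
    theta = rng.standard_normal(n)         # prior draws
    w = norm.pdf(y, loc=theta)             # likelihood weights p(y | theta)
    if l == 0:
        return np.log(w.mean())
    return np.log(w.mean()) - 0.5 * (
        np.log(w[: n // 2].mean()) + np.log(w[n // 2:].mean())
    )

def estimate(max_level=20, r=1.5):
    p = 2.0 ** (-r * np.arange(max_level + 1))
    p /= p.sum()
    l = rng.choice(max_level + 1, p=p)
    return level_difference(l) / p[l]

est = np.mean([estimate() for _ in range(100_000)])
exact = norm.logpdf(y, scale=np.sqrt(2.0)) # the marginal is N(0, 2) here
print(f"MLMC estimate: {est:.3f}, exact log p(y): {exact:.3f}")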
2 code implementations • 18 Aug 2017 • Michael B. Giles, Takashi Goda
In this paper, we develop a very efficient approach to the Monte Carlo estimation of the expected value of partial perfect information (EVPPI), which measures the average benefit of knowing the values of a subset of the uncertain parameters involved in a decision model.
Numerical Analysis
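For context, a plain nested Monte Carlo estimator of the quantity in question, EVPPI = E_X[max_d E[NB_d | X]] - max_d E[NB_d], on a hypothetical two-decision toy model (net benefits NB_1 = X + Z and NB_2 = 0 with X, Z ~ N(0, 1) independent, so EVPPI = E[max(X, 0)] = 1/sqrt(2*pi) ~= 0.399). This naive baseline costs n_outer * n_inner model evaluations and carries an inner-sample bias; the MLMC approach of the paper is what removes most of that cost.

import numpy as np

rng = np.random.default_rng(5)

def nested_evppi(n_outer=2000, n_inner=2000):
    x = rng.standard_normal(n_outer)                 # parameters of interest
    z = rng.standard_normal((n_outer, n_inner))      # remaining uncertainty
    cond_nb1 = (x[:, None] + z).mean(axis=1)         # estimate of E[NB_1 | X]
    # decision 2 has net benefit 0, so max_d is a comparison against 0
    first = np.maximum(cond_nb1, 0.0).mean()         # E_X[max_d E[NB_d | X]]
    second = max(cond_nb1.mean(), 0.0)               # max_d E[NB_d]
    return first - second

print(f"nested MC EVPPI: {nested_evppi():.3f} (true value: 0.399)")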