Search Results for author: Takashi Goda

Found 6 papers, 3 papers with code

Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization

no code implementations • 4 Jun 2022 • Takashi Goda, Wataru Kitade

We study stochastic gradient descent for solving conditional stochastic optimization problems, in which the objective to be minimized is given by a parametric nested expectation, with an outer expectation taken with respect to one random variable and an inner conditional expectation taken with respect to another random variable.

Stochastic Optimization
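
The construction in question can be pictured with a short, purely illustrative sketch. The toy objective, the geometric level distribution, and all constants below are assumptions and are not taken from the paper; the code only shows the general shape of an antithetic MLMC level correction combined with a randomly drawn (single-term) level, which is the kind of randomized-MLMC device used to obtain an unbiased gradient of a nested expectation.

```python
# Hypothetical sketch (not the paper's code) of an unbiased gradient estimator
# for a nested expectation
#   F(theta) = E_xi[ f( E_eta[ g(theta, xi, eta) | xi ] ) ],
# built from antithetic MLMC level corrections and a randomly drawn level.
# The toy model, the decay parameter r, and the SGD schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

f_prime = lambda u: 2.0 * u                     # outer function f(u) = u^2 (toy)
g       = lambda th, xi, eta: (th - eta) ** 2   # toy inner integrand
dg_dth  = lambda th, xi, eta: 2.0 * (th - eta)

def level_correction(theta, level, n0=2):
    """Antithetic MLMC correction Delta_l for one outer sample xi."""
    xi = rng.normal()
    eta = rng.normal(loc=xi, size=n0 * 2 ** level)   # inner samples eta | xi

    def plugin(e):  # plug-in gradient estimate f'(mean g) * mean dg/dtheta
        return f_prime(g(theta, xi, e).mean()) * dg_dth(theta, xi, e).mean()

    if level == 0:
        return plugin(eta)
    half = eta.size // 2
    return plugin(eta) - 0.5 * (plugin(eta[:half]) + plugin(eta[half:]))

def unbiased_gradient(theta, r=2 ** -1.5):
    """Single-term randomized MLMC: draw level L with geometric weights
    p_l = (1-r) r^l and return Delta_L / p_L.  Choosing 1/4 < r < 1/2 balances
    finite variance (corrections decay roughly like 4^-l for smooth problems)
    against finite expected cost (the level-l cost grows like 2^l)."""
    L = rng.geometric(1.0 - r) - 1
    return level_correction(theta, L) / ((1.0 - r) * r ** L)

# plain SGD driven by the unbiased (but noisy) gradient estimator
theta = 0.0
for k in range(1, 2001):
    theta -= (0.1 / k ** 0.6) * unbiased_gradient(theta)
print("theta after SGD:", theta)   # the toy objective is minimized near 0
```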

Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs

1 code implementation • 18 May 2020 • Takashi Goda, Tomohiko Hironaka, Wataru Kitade, Adam Foster

In this paper, applying the idea of randomized multilevel Monte Carlo (MLMC) methods, we introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain with finite expected squared $\ell_2$-norm and finite expected computational cost per sample.

Experimental Design Stochastic Optimization
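
For orientation, the sketch below computes only the standard nested Monte Carlo estimator of the expected information gain, EIG$(d) = \mathbb{E}_{\theta,y}[\log p(y\mid\theta,d) - \log p(y\mid d)]$, for an assumed linear-Gaussian toy design problem. This baseline estimator is biased; the paper's contribution, an unbiased randomized-MLMC estimator of the EIG gradient, is not reproduced here, and the model and sample sizes are illustrative assumptions.

```python
# Standard nested Monte Carlo estimator of the expected information gain for a
# toy model: theta ~ N(0,1), y | theta, d ~ N(d*theta, SIGMA^2).  Assumed for
# illustration only; it is the biased baseline that the paper's unbiased
# randomized-MLMC gradient estimator improves upon.
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.5   # assumed observation noise

def log_lik(y, theta, d):
    return -0.5 * np.log(2 * np.pi * SIGMA ** 2) - (y - d * theta) ** 2 / (2 * SIGMA ** 2)

def nested_mc_eig(d, n_outer=2000, n_inner=200):
    theta = rng.normal(size=n_outer)                    # prior draws
    y = d * theta + SIGMA * rng.normal(size=n_outer)    # simulated observations
    theta_inner = rng.normal(size=(n_outer, n_inner))   # fresh prior draws
    # inner average approximates the evidence p(y | d); this is where the bias enters
    log_evidence = np.log(np.exp(log_lik(y[:, None], theta_inner, d)).mean(axis=1))
    return (log_lik(y, theta, d) - log_evidence).mean()

for d in (0.5, 1.0, 2.0):
    exact = 0.5 * np.log(1 + d ** 2 / SIGMA ** 2)   # closed form for this model
    print(f"d={d}: nested MC EIG ~ {nested_mc_eig(d):.3f}, exact {exact:.3f}")
```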

Toeplitz Monte Carlo

1 code implementation • 9 Mar 2020 • Josef Dick, Takashi Goda, Hiroya Murata

Motivated mainly by applications to partial differential equations with random coefficients, we introduce a new class of Monte Carlo estimators, called Toeplitz Monte Carlo (TMC) estimators, for approximating the integral of a multivariate function with respect to the direct product of copies of a single univariate probability measure.

Numerical Analysis Methodology
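
The basic construction is easy to sketch: instead of drawing an independent n-by-s matrix of univariate samples, draw a single i.i.d. sequence of length n+s-1 and use its length-s sliding windows as the quadrature points, so the point matrix is constant along diagonals (Toeplitz-structured) and far fewer random numbers are needed. The integrand, dimension, and sample size below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the Toeplitz Monte Carlo idea: n points in s dimensions are
# built from a single i.i.d. sequence of length n+s-1 via sliding windows.
import numpy as np

rng = np.random.default_rng(2)

def f(points):
    # toy integrand: prod_j cos(x_j); its integral over N(0,1)^s is exp(-s/2)
    return np.cos(points).prod(axis=1)

def toeplitz_mc(n, s):
    seq = rng.normal(size=n + s - 1)                              # one i.i.d. sequence
    windows = np.lib.stride_tricks.sliding_window_view(seq, s)    # shape (n, s)
    points = windows[:, ::-1]   # reverse columns so the point matrix is Toeplitz
    return f(points).mean()

def plain_mc(n, s):
    return f(rng.normal(size=(n, s))).mean()   # uses n*s random numbers instead

n, s = 10_000, 8
print("Toeplitz MC:", toeplitz_mc(n, s))
print("plain MC   :", plain_mc(n, s))
print("exact      :", np.exp(-s / 2))
```

Each TMC point still has the correct s-dimensional product distribution, so the estimator remains unbiased; the price is correlation between overlapping windows, which the variance of the estimator has to account for.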

Efficient Debiased Evidence Estimation by Multilevel Monte Carlo Sampling

no code implementations • 14 Jan 2020 • Kei Ishikawa, Takashi Goda

In this paper, we propose a new stochastic optimization algorithm for Bayesian inference based on multilevel Monte Carlo (MLMC) methods.

Bayesian Inference Stochastic Optimization
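
As a rough illustration of the MLMC structure involved, the sketch below estimates the log evidence $\log Z = \log \mathbb{E}_{\theta\sim\mathrm{prior}}[p(y\mid\theta)]$ for an assumed conjugate Gaussian model: level $l$ uses $2^l$ prior samples inside the logarithm, and levels are combined through antithetic corrections in a telescoping sum. The model, the number of levels, and the deliberately naive fixed number of replications per level are assumptions; the paper's actual algorithm and its use inside stochastic optimization are not reproduced.

```python
# Minimal MLMC sketch (assumptions, not the paper's implementation) for the log
# evidence of a toy conjugate Gaussian model: theta ~ N(0,1), y_i | theta ~ N(theta,1).
import numpy as np

rng = np.random.default_rng(3)
y = np.array([0.3, -0.1, 0.8, 0.5])   # toy data

def log_lik(theta):
    return -0.5 * len(y) * np.log(2 * np.pi) - 0.5 * ((y - theta[:, None]) ** 2).sum(-1)

def plugin(theta):
    """Plug-in estimate log( mean_n p(y | theta_n) ) from prior samples theta."""
    return np.log(np.exp(log_lik(theta)).mean())

def correction(level, n0=2):
    theta = rng.normal(size=n0 * 2 ** level)   # prior samples for one replication
    if level == 0:
        return plugin(theta)
    half = theta.size // 2
    return plugin(theta) - 0.5 * (plugin(theta[:half]) + plugin(theta[half:]))

def mlmc_log_evidence(L=8, m=4000):
    # telescoping sum of level corrections; it reproduces the expectation of the
    # finest-level plug-in estimator, whose bias shrinks as L grows
    return sum(np.mean([correction(l) for _ in range(m)]) for l in range(L + 1))

n = len(y)   # exact log evidence for this conjugate model, for comparison
exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
         - 0.5 * (y ** 2).sum() + 0.5 * y.sum() ** 2 / (n + 1))
print("MLMC estimate:", mlmc_log_evidence())
print("exact        :", exact)
```

A true MLMC implementation would also tune the number of replications per level to balance variance and cost, rather than using a fixed value as in this sketch.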

Multilevel Monte Carlo estimation of log marginal likelihood

no code implementations • 23 Dec 2019 • Takashi Goda, Kei Ishikawa

In this short note we provide an unbiased multilevel Monte Carlo estimator of the log marginal likelihood and discuss its application to variational Bayes.
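
The debiasing device behind such an estimator can be stated in one line (notation assumed here, not quoted from the note). Write $P_l$ for a plug-in estimator of the log marginal likelihood built from $2^l$ samples and $\Delta_l = P_l - P_{l-1}$ (with $\Delta_0 = P_0$); drawing a random level $L$ with $\mathbb{P}(L = l) = p_l > 0$ then gives

$$\mathbb{E}\!\left[\frac{\Delta_L}{p_L}\right] = \sum_{l \ge 0} p_l \,\frac{\mathbb{E}[\Delta_l]}{p_l} = \sum_{l \ge 0} \mathbb{E}[\Delta_l] = \lim_{l \to \infty} \mathbb{E}[P_l] = \log Z,$$

provided the corrections decay fast enough for the interchange to be valid and for the resulting estimator to have finite variance and finite expected cost.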

Decision-making under uncertainty: using MLMC for efficient estimation of EVPPI

2 code implementations • 18 Aug 2017 • Michael B. Giles, Takashi Goda

In this paper we develop a very efficient approach to the Monte Carlo estimation of the expected value of partial perfect information (EVPPI), which measures the average benefit of knowing the value of a subset of the uncertain parameters involved in a decision model.

Numerical Analysis
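
A toy version of the nested term $\mathbb{E}_{\varphi}[\max_d \mathbb{E}_{\psi}[f_d(\varphi,\psi)\mid\varphi]]$ can be sketched as follows. The two-decision model, the constants, and the fixed per-level sample counts are assumptions rather than the authors' code; the sketch only shows the general MLMC shape, with $2^l$ inner samples at level $l$ combined through antithetic corrections.

```python
# Minimal MLMC sketch (toy model assumed, not the authors' code) for the nested
# term of the EVPPI: E_phi[ max_d E_psi[ f_d(phi, psi) | phi ] ].
import numpy as np

rng = np.random.default_rng(4)

def net_benefit(phi, psi):
    # toy decision model: d=0 does nothing, d=1 has conditional mean phi
    return np.stack([np.zeros_like(psi), phi + psi])   # shape (2, n_inner)

def inner_max(phi, psi):
    """Max over decisions of the inner sample averages, for one outer phi."""
    return net_benefit(phi, psi).mean(axis=1).max()

def correction(level, n0=2):
    phi = rng.normal()
    psi = rng.normal(size=n0 * 2 ** level)
    if level == 0:
        return inner_max(phi, psi)
    half = psi.size // 2
    return inner_max(phi, psi) - 0.5 * (inner_max(phi, psi[:half]) +
                                        inner_max(phi, psi[half:]))

def mlmc_nested_term(L=8, m=4000):
    return sum(np.mean([correction(l) for _ in range(m)]) for l in range(L + 1))

# for this toy model max_d E[f_d] = max(0, E[phi + psi]) = 0, so
evppi = mlmc_nested_term() - 0.0
print("MLMC EVPPI estimate:", evppi, "| exact for this toy model:", 1 / np.sqrt(2 * np.pi))
```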
