no code implementations • 14 Sep 2023 • Michael Griebel, Peter Oswald
We consider the problem of approximating the regression function from noisy vector-valued data by an online learning algorithm using an appropriate reproducing kernel Hilbert space (RKHS) as a prior.
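The setting can be illustrated by the simplest online learning scheme in an RKHS: stochastic gradient descent on the squared loss, which after each sample adds one kernel term to the current expansion. This is a minimal sketch, not the paper's algorithm; the Gaussian kernel, the fixed step size `eta`, and the absence of regularization are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel; an illustrative choice of RKHS prior."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def online_kernel_regression(stream, eta=0.5, gamma=1.0):
    """One pass of online (stochastic) gradient descent in an RKHS.

    stream: iterable of (x_t, y_t) pairs with noisy targets y_t.
    Returns centers and coefficients of the learned expansion
    f(x) = sum_i alpha_i * k(x_i, x).
    """
    centers, alphas = [], []
    for x_t, y_t in stream:
        # Prediction of the current iterate f_{t-1} at x_t
        pred = sum(a * gaussian_kernel(c, x_t, gamma)
                   for c, a in zip(centers, alphas))
        # Gradient step on the squared loss adds one new kernel term
        centers.append(x_t)
        alphas.append(-eta * (pred - y_t))
    return centers, alphas

def predict(x, centers, alphas, gamma=1.0):
    return sum(a * gaussian_kernel(c, x, gamma)
               for c, a in zip(centers, alphas))
```

Each update is a rank-one correction, so a single pass over n samples costs O(n^2) kernel evaluations; practical variants truncate or sparsify the expansion.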
no code implementations • 5 Aug 2021 • Bastian Bohn, Michael Griebel, Dinesh Kannan
In this paper, we propose neural networks that address the stability and field-of-view limitations of convolutional neural networks (CNNs).
no code implementations • 21 Dec 2020 • Alexandros Gilch, Michael Griebel, Jens Oettershagen
Motivated by the popular Probit and Mixed Logit models, we consider double integrals with a linking function that stems from the considered estimator (e.g., the logarithm for maximum likelihood) and apply a sparse tensor product quadrature to reduce the computational effort of approximating the combined integral.
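A two-dimensional sparse tensor product quadrature can be sketched via the classical combination technique, which sums anisotropic tensor rules along two level diagonals. The composite trapezoid rule on [0, 1] is an illustrative choice of univariate rule; the paper's quadratures and integration domains may differ.

```python
import numpy as np

def trapezoid_rule(level):
    """1D composite trapezoid rule with 2^level + 1 nodes on [0, 1]."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def sparse_quadrature_2d(f, level):
    """Sparse (Smolyak) quadrature in 2D via the combination technique:
    Q_L = sum_{l1+l2=L} Q_{l1} x Q_{l2}  -  sum_{l1+l2=L-1} Q_{l1} x Q_{l2}.
    Requires level >= 1.
    """
    total = 0.0
    for L, coeff in ((level, 1.0), (level - 1, -1.0)):
        for l1 in range(L + 1):
            l2 = L - l1
            x1, w1 = trapezoid_rule(l1)
            x2, w2 = trapezoid_rule(l2)
            X1, X2 = np.meshgrid(x1, x2, indexing="ij")
            W = np.outer(w1, w2)          # tensor product weights
            total += coeff * np.sum(W * f(X1, X2))
    return total
```

The combination technique uses O(2^L * L) nodes instead of the O(4^L) nodes of the full tensor grid at level L, which is the source of the reduced computational effort for smooth integrands.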
Methodology • Numerical Analysis • MSC: 62P20, 65D30, 65D32
no code implementations • 15 Oct 2018 • Bastian Bohn, Michael Griebel, Jens Oettershagen
In this paper, we propose a preprocessing approach for these adaptive sparse grid algorithms that determines an optimized, problem-dependent coordinate system and thus reduces the effective dimensionality of a given data set in the ANOVA sense.
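The idea of a problem-dependent coordinate system can be sketched with the simplest such preprocessing: rotating the data into its principal axes before applying an axis-aligned method. This PCA-style rotation is an illustrative stand-in; the paper's ANOVA-based construction is more refined.

```python
import numpy as np

def optimized_coordinates(X):
    """Rotate data into a problem-dependent coordinate system.

    Illustrative choice: the principal axes of the centered data,
    ordered so that the most important direction comes first.
    Returns the rotated data and the rotation matrix R.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # descending importance
    R = eigvecs[:, order]
    return Xc @ R, R
```

If the data is concentrated near a low-dimensional subspace, the trailing rotated coordinates carry almost no variance and an adaptive sparse grid can effectively ignore them, which is the sense in which the effective dimensionality drops.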
no code implementations • 29 Sep 2017 • Bastian Bohn, Michael Griebel, Christian Rieger
In this paper, we provide finite-sample and infinite-sample representer theorems for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces.
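For context, the classical finite-sample representer theorem (which the paper generalizes to concatenated kernels) states that the regularized empirical risk minimizer over the whole RKHS is a finite kernel expansion over the data points. A minimal kernel ridge regression sketch, with an illustrative Gaussian kernel:

```python
import numpy as np

def kernel_ridge_coefficients(X, y, kernel, lam=1e-2):
    """Classical representer theorem in action: the minimizer of
    (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 over the RKHS is
    f(x) = sum_i alpha_i * k(x_i, x), with
    alpha = (K + n * lam * I)^{-1} y for the kernel matrix K.
    """
    n = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(x, X, alpha, kernel):
    """Evaluate the finite kernel expansion at a new point x."""
    return sum(a * kernel(xi, x) for xi, a in zip(X, alpha))
```

The point of a representer theorem is that this infinite-dimensional optimization reduces to the n-dimensional linear system above; the paper establishes analogous statements when kernel functions are concatenated, as in multi-layer kernel constructions.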