no code implementations • 6 Mar 2023 • Chi-Ken Lu
Building on the multivariate Edgeworth expansion, we propose a non-Gaussian distribution, expressed in differential form, to model a finite set of outputs from a random neural network, and we derive the corresponding marginal and conditional properties.
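A minimal sketch of the idea in the univariate case (the paper works with the multivariate expansion): a Gaussian base density is corrected by Hermite-polynomial terms weighted by higher cumulants. The cumulant values below are placeholders, not the paper's numbers.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm

def edgeworth_pdf(x, kappa3, kappa4):
    """First-order Edgeworth-corrected density for a standardized variable.

    kappa3, kappa4: third/fourth cumulants (skewness, excess kurtosis),
    e.g. estimated from samples of a random network's output.
    """
    he3 = hermeval(x, [0, 0, 0, 1])      # probabilists' Hermite He_3(x)
    he4 = hermeval(x, [0, 0, 0, 0, 1])   # He_4(x)
    return norm.pdf(x) * (1.0 + kappa3 / 6.0 * he3 + kappa4 / 24.0 * he4)

x = np.linspace(-4, 4, 201)
p = edgeworth_pdf(x, kappa3=0.0, kappa4=0.5)  # heavier-tailed than a Gaussian
```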
no code implementations • 14 Mar 2022 • Chi-Ken Lu, Patrick Shafto
By Bochner's theorem, a DGP with a squared exponential kernel can be viewed as a deep trigonometric network consisting of random feature layers, sine and cosine activation units, and random weight layers.
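A minimal sketch of that random-feature view for a single layer, assuming a unit-variance SE kernel: by Bochner's theorem the kernel's spectral density is Gaussian, so sampled frequencies fed through sine and cosine units approximate the kernel.

```python
import numpy as np

def trig_layer(X, lengthscale, n_features, rng):
    """Random trigonometric features whose inner product approximates
    the SE kernel k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    # Bochner: the SE kernel's spectral density is N(0, I / lengthscale^2)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_features))
    Z = X @ W
    # sine and cosine activation units; scaling makes phi(x) . phi(x') ~ k(x, x')
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(n_features)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (5, 2))
Phi = trig_layer(X, lengthscale=0.5, n_features=2000, rng=rng)
K_approx = Phi @ Phi.T  # close to the exact SE Gram matrix
```

Stacking such layers, with random weights mapping one layer's features to the next layer's inputs, yields the deep trigonometric network interpretation.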
1 code implementation • 1 Oct 2021 • Chi-Ken Lu, Patrick Shafto
Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy, obtained by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning.
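For context, a minimal example of the GP kernel composition baseline named above (kernels combined by product or sum), using scikit-learn and a synthetic extrapolation task as assumed choices, not the paper's setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared

# Composed kernel: periodic structure modulated by a smooth envelope
kernel = RBF(length_scale=2.0) * ExpSineSquared(length_scale=1.0, periodicity=1.0)

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, (40, 1))
y = np.sin(2 * np.pi * X[:, 0]) * np.exp(-0.1 * X[:, 0])

gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
X_star = np.linspace(5, 8, 100)[:, None]         # extrapolation region
mean, std = gp.predict(X_star, return_std=True)  # posterior mean and uncertainty
```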
1 code implementation • 7 Feb 2020 • Chi-Ken Lu, Patrick Shafto
Recently, [1] pointed out that the hierarchical structure of DGPs is well suited to multi-fidelity regression, in which one is provided sparse high-precision observations alongside plentiful low-fidelity observations.
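The DGP treatment is beyond a short snippet; as a stand-in sketch of the multi-fidelity setting itself, here is the classical two-level autoregressive scheme (a GP on the cheap low-fidelity data plus a GP on the sparse high-fidelity residuals, Kennedy-O'Hagan style). The synthetic data and scikit-learn usage are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Plentiful but noisy low-fidelity data; sparse high-precision data (synthetic)
X_lo = rng.uniform(0, 1, (50, 1))
y_lo = np.sin(8 * X_lo[:, 0]) + 0.3 * rng.normal(size=50)
X_hi = rng.uniform(0, 1, (8, 1))
y_hi = np.sin(8 * X_hi[:, 0])

gp_lo = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.1)).fit(X_lo, y_lo)
# Second level models the discrepancy between the two fidelities
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(
    X_hi, y_hi - gp_lo.predict(X_hi))

def predict_high_fidelity(X):
    return gp_lo.predict(X) + gp_delta.predict(X)
```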
no code implementations • 27 May 2019 • Chi-Ken Lu, Scott Cheng-Hsin Yang, Xiaoran Hao, Patrick Shafto
We propose an interpretable DGP, based on approximating the DGP as a GP by calculating its exact moments; the moments additionally identify the heavy-tailed nature of some DGP distributions.
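A Monte Carlo illustration of the idea (the paper derives the moments exactly): sample a two-layer DGP with unit-variance SE kernels at two inputs, match it by its second moments, and observe positive excess kurtosis in the increment, i.e. heavier-than-Gaussian tails. The two-point setup below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Inner GP at inputs x1 = 0, x2 = 0.5 with SE-kernel correlation rho
rho = np.exp(-0.5 * 0.5**2)
z = rng.normal(size=(2, n))
f1 = z[0]
f2 = rho * z[0] + np.sqrt(1 - rho**2) * z[1]

# Outer GP evaluated at (f1, f2): per-sample correlation k = exp(-(f1-f2)^2 / 2)
k = np.exp(-0.5 * (f1 - f2) ** 2)
w = rng.normal(size=(2, n))
g1 = w[0]
g2 = k * w[0] + np.sqrt(1 - k**2) * w[1]

d = g1 - g2                                # DGP increment: a Gaussian scale mixture
cov_matched = np.cov(np.vstack([g1, g2]))  # second moments for the matched GP
excess_kurt = np.mean(d**4) / np.mean(d**2) ** 2 - 3.0  # > 0: heavy tails
```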
1 code implementation • 9 Mar 2018 • Chi-Ken Lu, Scott Cheng-Hsin Yang, Patrick Shafto
We propose a Standing Wave Decomposition (SWD) approximation to Gaussian Process (GP) regression.
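As a hedged sketch only: the paper's SWD construction is specific, but the flavor of a standing-wave (sine-mode) basis approximation to regression can be illustrated with generic sinusoidal features. Every choice below (domain, number of modes, ridge solve) is an assumption, not the paper's algorithm.

```python
import numpy as np

def standing_wave_features(x, n_modes, L=1.0):
    # Sine modes sin(m * pi * x / L) on [0, L]: a generic standing-wave basis
    m = np.arange(1, n_modes + 1)
    return np.sin(np.pi * np.outer(x, m) / L)

def swd_like_regression(x_train, y_train, x_test, n_modes=20, jitter=1e-6):
    Phi = standing_wave_features(x_train, n_modes)
    # Regularized least squares in the truncated mode basis
    w = np.linalg.solve(Phi.T @ Phi + jitter * np.eye(n_modes), Phi.T @ y_train)
    return standing_wave_features(x_test, n_modes) @ w
```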