1 code implementation • 28 Oct 2020 • Simon Damm, Dennis Forster, Dmytro Velychko, Zhenwen Dai, Asja Fischer, Jörg Lücke
Here we show that for standard (i.e., Gaussian) VAEs the ELBO converges to a value given by the sum of three entropies: the (negative) entropy of the prior distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions (the latter is already part of the ELBO).
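As a quick reference, the limit described above can be written out explicitly; the notation ($\mathcal{L}$ for the ELBO, $\mathcal{H}$ for entropy, $q_\Phi$ for the variational distributions, $p_\Theta$ for prior and observable distributions) is our own shorthand and not taken verbatim from the paper:

$$
\mathcal{L}(\Phi,\Theta) \;\longrightarrow\; \frac{1}{N}\sum_{n=1}^{N}\mathcal{H}\big[q_\Phi(z\mid x^{(n)})\big] \;-\; \mathcal{H}\big[p_\Theta(z)\big] \;-\; \frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{q_\Phi(z\mid x^{(n)})}\Big[\mathcal{H}\big[p_\Theta(x\mid z)\big]\Big]
$$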
1 code implementation • 1 Oct 2018 • Florian Hirschberger, Dennis Forster, Jörg Lücke
The aim of the project (which resulted in this arXiv version and the later TPAMI paper) is to explore the current efficiency and large-scale limits of fitting parametric clustering models to data distributions.
no code implementations • 9 Nov 2017 • Dennis Forster, Jörg Lücke
The basic idea is to use a partial variational E-step that reduces the $\mathcal{O}(NCD)$ complexity required for a full E-step to sublinear complexity.
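A minimal sketch of what such a partial E-step can look like, assuming the per-point candidate cluster sets are already given (how these sets are selected and updated is the core of the actual method and is not shown here). Function and variable names are illustrative, not from the paper:

```python
import numpy as np

def truncated_e_step(X, mu, sigma2, candidates):
    """Partial (truncated) variational E-step for an isotropic GMM.

    X          : (N, D) data points
    mu         : (C, D) cluster means
    sigma2     : scalar isotropic variance (equal mixing weights assumed)
    candidates : (N, Cp) indices of the Cp << C clusters considered per point

    Per-iteration cost is O(N * Cp * D) instead of the O(N * C * D)
    of a full E-step over all C clusters.
    """
    # Squared distances to candidate means only: (N, Cp)
    diffs = X[:, None, :] - mu[candidates]        # (N, Cp, D)
    d2 = np.sum(diffs ** 2, axis=2)
    # Unnormalized log posteriors under the isotropic Gaussian model
    log_r = -0.5 * d2 / sigma2
    # Normalize within each candidate set (truncated posterior)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    return r  # responsibilities, aligned with `candidates`
```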
no code implementations • 16 Apr 2017 • Jörg Lücke, Dennis Forster
We show that $k$-means (Lloyd's algorithm) is obtained as a special case when truncated variational EM approximations are applied to Gaussian Mixture Models (GMM) with isotropic Gaussians.
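A minimal sketch of this reduction, assuming isotropic Gaussians with equal variances and truncation of the variational posterior to a single cluster per data point; this is illustrative code, not the authors' implementation:

```python
import numpy as np

def lloyd_step(X, mu):
    """One EM iteration with the variational posterior truncated to the
    single best cluster, for an isotropic GMM with equal variances.

    E-step: q_n(c) = 1 for c = argmin_c ||x_n - mu_c||^2  (hard assignment)
    M-step: mu_c   = mean of points assigned to c         (Lloyd update)
    """
    # Hard assignments = truncated posterior with one cluster per point
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, C)
    z = d2.argmin(axis=1)
    # M-step on the truncated posterior is exactly the k-means mean update;
    # empty clusters keep their old mean (one simple convention among several)
    new_mu = np.array([X[z == c].mean(axis=0) if np.any(z == c) else mu[c]
                       for c in range(len(mu))])
    return new_mu, z
```

With the truncated posterior concentrated on one cluster, the E-step becomes the nearest-mean assignment and the M-step the cluster-mean update, i.e., one iteration of Lloyd's algorithm.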
no code implementations • 7 Feb 2017 • Dennis Forster, Jörg Lücke
Inference and learning for probabilistic generative networks are often very challenging, which typically prevents scaling to networks as large as those used in deep discriminative approaches.
no code implementations • 28 Jun 2015 • Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke
This results in powerful yet very complex models that are hard to train and that require additional labels for optimal parameter tuning, labels that are often unavailable when labeled data is scarce.