no code implementations • ICML 2020 • Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei
We present an explicit deep network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional target distribution with finite differential entropy and a Lipschitz-continuous pdf.
no code implementations • 26 Jul 2021 • Dmytro Perekrestenko, Léandre Eberhard, Helmut Bölcskei
We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks from a $1$-dimensional uniform input distribution.
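The core idea behind such results is that a ReLU network computes a piecewise-linear map, and pushing uniform noise through a piecewise-linear approximation of an inverse CDF yields an approximation of the target distribution. The following is a minimal one-dimensional sketch (not the paper's construction): the target has CDF $F(x)=x^2$ on $[0,1]$, so $F^{-1}(u)=\sqrt{u}$, and the piecewise-linear interpolant of $\sqrt{u}$ is written explicitly as a one-hidden-layer ReLU network. All names and the choice of target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Knots of a piecewise-linear approximation to the inverse CDF
# F^{-1}(u) = sqrt(u) of the (illustrative) target with pdf 2x on [0, 1].
knots = np.linspace(0.0, 1.0, 33)
vals = np.sqrt(knots)

# Express the interpolant as a one-hidden-layer ReLU network:
#   f(u) = sum_i c_i * relu(u - knots[i]),
# where the c_i are the successive slope changes of the interpolant.
slopes = np.diff(vals) / np.diff(knots)
c = np.diff(slopes, prepend=0.0)  # c[0] = slopes[0], c[i] = slopes[i] - slopes[i-1]

def net(u):
    # One hidden ReLU layer with 32 neurons, then a linear output layer.
    return relu(u[:, None] - knots[:-1][None, :]) @ c

# Push 1-D uniform noise through the network: the output samples
# approximately follow the target distribution (mean E[sqrt(U)] = 2/3).
u = rng.uniform(size=200_000)
samples = net(u)
```

Increasing the number of knots (hidden neurons) tightens the approximation; the papers above quantify how network size must grow with the desired accuracy and, in the $d$-dimensional case, how a single scalar input can be "space-filled" into higher dimensions.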
no code implementations • 30 Jun 2020 • Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei
We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution.
no code implementations • 8 Jan 2019 • Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.
no code implementations • ICLR 2019 • Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei
We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.
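A building block behind ReLU approximation of polynomials is the well-known sawtooth construction (in the style of Yarotsky, 2017), which approximates $x^2$ on $[0,1]$ with error decaying exponentially in depth: with the hat function $g$ and its $k$-fold compositions $g_k$, the network $f_m(x) = x - \sum_{k=1}^{m} g_k(x)/4^k$ satisfies $|f_m(x) - x^2| \le 4^{-(m+1)}$. The sketch below is this standard construction, not the paper's exact one.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Hat function g on [0, 1] as a 3-neuron ReLU layer:
    # g(0) = g(1) = 0, g(1/2) = 1.
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def sq_approx(x, m):
    # Depth-m sawtooth approximation of x**2 on [0, 1]:
    # f_m(x) = x - sum_{k=1}^{m} g_k(x) / 4**k, with g_k = g composed k times.
    out = np.array(x, dtype=float)
    h = np.array(x, dtype=float)
    for k in range(1, m + 1):
        h = hat(h)           # g_k has 2**(k-1) teeth
        out = out - h / 4**k
    return out

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(sq_approx(xs, 6) - xs**2))  # bounded by 4**(-7)
```

Each extra layer quarters the error, i.e. error $\varepsilon$ costs depth $O(\log(1/\varepsilon))$; squaring then yields multiplication via $xy = ((x+y)^2 - x^2 - y^2)/2$, and from there arbitrary polynomials.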
1 code implementation • 17 Oct 2017 • Martin Zihlmann, Dmytro Perekrestenko, Michael Tschannen
We propose two deep neural network architectures for classification of arbitrary-length electrocardiogram (ECG) recordings and evaluate them on the atrial fibrillation (AF) classification data set provided by the PhysioNet/CinC Challenge 2017.
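One way a convolutional network can classify recordings of arbitrary length is to aggregate the convolutional feature sequence with a global pooling step before the classifier, so the output dimension no longer depends on the input length. The toy numpy model below illustrates only this length-invariance mechanism; the layer sizes and random weights are placeholders, not the architectures proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv1d(x, w, b):
    # Valid 1-D convolution: x (T,), w (C, K), b (C,) -> (T - K + 1, C).
    K = w.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, K)
    return windows @ w.T + b

# Hypothetical tiny model: conv -> ReLU -> global average pool -> linear.
w1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
w2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

def classify(ecg):
    h = relu(conv1d(ecg, w1, b1))  # (T', 8): T' depends on input length
    pooled = h.mean(axis=0)        # (8,): global pooling removes the length
    return pooled @ w2.T + b2      # 2 logits, e.g. AF vs. normal rhythm

# Same output shape regardless of recording length.
short = classify(rng.standard_normal(300))
long_ = classify(rng.standard_normal(9000))
```

Replacing the mean pool with a recurrent layer over the feature sequence gives the CRNN-style alternative, which aggregates temporal structure instead of averaging it away.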
no code implementations • 7 Mar 2017 • Dmytro Perekrestenko, Volkan Cevher, Martin Jaggi
Coordinate descent methods employ random partial updates of decision variables in order to solve huge-scale convex optimization problems.
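The partial-update idea is easy to see on a strongly convex quadratic $f(x) = \tfrac12 x^\top A x - b^\top x$: at each step pick one coordinate $i$ uniformly at random and minimize $f$ exactly along it, which touches only row $i$ of $A$. A minimal sketch under these assumptions (not the variants analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)   # exact minimizer, for comparison only

x = np.zeros(n)
for _ in range(5000):
    i = rng.integers(n)          # random coordinate
    g_i = A[i] @ x - b[i]        # partial derivative along coordinate i
    x[i] -= g_i / A[i, i]        # exact minimization along that coordinate
```

Each iteration costs $O(n)$ instead of the $O(n^2)$ of a full gradient step, which is what makes the method attractive at huge scale; the randomness in the coordinate choice is what the convergence analysis averages over.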