2 Nov 2023 • Paul Geuchen, Thomas Heindl, Dominik Stöger, Felix Voigtlaender
Empirical studies have widely demonstrated that neural networks are highly sensitive to small, adversarial perturbations of the input.
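A minimal numerical illustration of this sensitivity, using a toy linear model as a stand-in for a network (all dimensions and names here are illustrative, not from the paper): an l∞-bounded perturbation aligned against the weight vector shifts the output by eps·||w||₁, which grows linearly with the dimension even though the relative change to the input stays tiny.

```python
import numpy as np

# Toy linear model f(x) = w . x (illustrative stand-in for a network).
rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d)
x = rng.standard_normal(d)

# Worst-case perturbation with per-coordinate budget eps (FGSM-style):
# delta = -eps * sign(w) shifts the output by exactly -eps * ||w||_1,
# which grows linearly in d, while ||delta||_2 / ||x||_2 stays around eps.
eps = 0.01
delta = -eps * np.sign(w)
shift = w @ (x + delta) - w @ x
print(shift, -eps * np.abs(w).sum())  # the two values agree
```

The point of the sketch is only the scaling: the perturbation is a ~1% relative change of the input, yet the induced output shift scales with the dimension d.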
24 Mar 2023 • Mahdi Soltanolkotabi, Dominik Stöger, Changzhi Xie
We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) a coupling property, whereby the factors remain coupled in various ways throughout the gradient update trajectory, and (2) an algorithmic regularization property, whereby the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model.
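The low-rank bias described in (2) can be reproduced in a small NumPy sketch. This is a toy full-observation instance with a symmetric factorization and small random initialization; the sizes, step size, and iteration count are illustrative and not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
Ustar = rng.standard_normal((n, r))
M = Ustar @ Ustar.T                      # rank-r ground truth

# Overparameterized factorization: W is n x n, so W W^T could be full rank.
alpha = 1e-4                             # small random initialization scale
W = alpha * rng.standard_normal((n, n))

eta = 0.1 / np.linalg.norm(M, 2)         # step size
for _ in range(2000):
    grad = (W @ W.T - M) @ W             # gradient of (1/4) ||W W^T - M||_F^2
    W -= eta * grad

# Despite the overparameterization, the iterate stays close to rank r:
# the trailing singular values of W W^T remain at the initialization scale.
s = np.linalg.svd(W @ W.T, compute_uv=False)
print(s[: r + 2])
```

Starting from a small initialization, the signal directions grow geometrically while the components orthogonal to the ground-truth column space only shrink, which is why the trailing spectrum never takes off.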
17 Mar 2023 • Julia Kostin, Felix Krahmer, Dominik Stöger
Reformulating blind deconvolution as a low-rank matrix recovery problem has led to multiple theoretical recovery guarantees over the past decade, largely owing to the success of the nuclear norm minimization heuristic.
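The reformulation in question is the standard lifting trick: with subspace models w = Bh for the filter and s = Cx for the signal, each Fourier coefficient of the circular convolution w ⊛ s is a linear function of the rank-one matrix h xᵀ, so recovering (h, x) becomes a low-rank matrix recovery problem. A quick numerical check of this identity (all dimensions and subspaces chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 64, 8, 8
B = rng.standard_normal((n, k))          # subspace model for the filter
C = rng.standard_normal((n, m))          # subspace model for the signal
h = rng.standard_normal(k)
x = rng.standard_normal(m)

# Observed circular convolution of w = B h and s = C x.
w, s_ = B @ h, C @ x
y = np.fft.ifft(np.fft.fft(w) * np.fft.fft(s_))

# Lifting: the l-th Fourier coefficient of y is a *linear* measurement of
# the rank-one matrix X = h x^T, namely y_hat[l] = b_l^T X c_l, where
# b_l^T and c_l^T are the l-th rows of the DFTs of B and C.
X = np.outer(h, x)
Fb = np.fft.fft(B, axis=0)
Fc = np.fft.fft(C, axis=0)
y_hat = np.fft.fft(y)
lin = np.einsum('lk,km,lm->l', Fb, X, Fc)
print(np.max(np.abs(y_hat - lin)))       # agrees up to floating-point error
```

Since X has rank one, minimizing the nuclear norm subject to these linear measurements is the convex heuristic the excerpt refers to.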
25 Apr 2022 • Kiryung Lee, Dominik Stöger
In this paper, we show that ALS with random initialization converges to the true solution with $\varepsilon$-accuracy in $O(\log n + \log(1/\varepsilon))$ iterations using only a near-optimal number of samples, where we assume the measurement matrices to be i.i.d.
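A sketch of alternating least squares (ALS) in a toy rank-one matrix sensing model with i.i.d. Gaussian measurement matrices; the dimensions, sample size, and iteration count here are illustrative and not tuned to the paper's guarantees. Starting from a random initialization, each step solves a linear least-squares problem in one factor with the other held fixed.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 400                           # dimension, number of measurements
u_star = rng.standard_normal(n)
v_star = rng.standard_normal(n)
A = rng.standard_normal((m, n, n))       # i.i.d. Gaussian measurement matrices
y = np.einsum('i,mij,j->m', u_star, A, v_star)   # y_i = u*^T A_i v*

# ALS with random initialization: the model y_i = u^T A_i v is linear in u
# for fixed v (design rows (A_i v)^T) and linear in v for fixed u.
v = rng.standard_normal(n)
for _ in range(30):
    u = np.linalg.lstsq(A @ v, y, rcond=None)[0]
    v = np.linalg.lstsq(np.einsum('mij,i->mj', A, u), y, rcond=None)[0]

# Compare on the outer product, which removes the scale ambiguity in (u, v).
err = np.linalg.norm(np.outer(u, v) - np.outer(u_star, v_star))
print(err / np.linalg.norm(np.outer(u_star, v_star)))
```

Comparing outer products rather than the factors themselves sidesteps the inherent scaling ambiguity (u, v) → (cu, v/c).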
NeurIPS 2021 • Dominik Stöger, Mahdi Soltanolkotabi
Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on nonconvex losses with overparameterized models.
12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.
NeurIPS 2021 • Christian Kümmerle, Claudio Mayrink Verdun, Dominik Stöger
The recovery of sparse data is at the core of many applications in machine learning and signal processing.
28 Feb 2019 • Felix Krahmer, Dominik Stöger
We find that for both of these applications, the dimension factors in the noise bounds are not an artifact of the proof; rather, the problems are intrinsically badly conditioned.