no code implementations • 23 Dec 2023 • Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta
Neural Networks can be efficiently compressed through pruning, significantly reducing storage and computational demands while maintaining predictive performance.
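As a concrete illustration of the idea, here is a minimal magnitude-pruning sketch in PyTorch; the global-threshold criterion and the `sparsity` level are illustrative assumptions, not the specific method studied in this paper:

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the globally smallest-magnitude weights (illustrative sketch)."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    # Global threshold below which `sparsity` of all weights fall.
    # (For very large models, torch.kthvalue avoids quantile's size limit.)
    threshold = torch.quantile(weights, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices, leave biases dense
                p.mul_((p.abs() > threshold).float())
```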
1 code implementation • 29 Jun 2023 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Model soups (Wortsman et al., 2022) enhance generalization and out-of-distribution (OOD) performance by averaging the parameters of multiple models into a single one, without increasing inference time.
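In its simplest "uniform soup" form, this amounts to a parameter-wise mean over checkpoints of the same architecture; a minimal PyTorch sketch, where `uniform_soup` is a hypothetical helper for illustration:

```python
import torch
import torch.nn as nn

def uniform_soup(models: list[nn.Module]) -> dict[str, torch.Tensor]:
    """Average the parameters of several models with identical architecture."""
    sds = [m.state_dict() for m in models]
    soup = {}
    for key, ref in sds[0].items():
        if ref.is_floating_point():
            soup[key] = torch.stack([sd[key] for sd in sds]).mean(dim=0)
        else:
            soup[key] = ref  # e.g. integer BatchNorm counters: keep as-is
    return soup

# Usage: load the averaged weights into a fresh model of the same architecture.
# soup_model.load_state_dict(uniform_soup([model_a, model_b, model_c]))
```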
1 code implementation • 24 May 2022 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Many existing Neural Network pruning approaches either rely on retraining or induce a strong bias in order to converge to a sparse solution throughout training.
1 code implementation • 1 Nov 2021 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Many Neural Network Pruning approaches consist of several iterative training and pruning steps, seemingly losing a significant amount of their performance after pruning and then recovering it in the subsequent retraining phase.
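This train-prune-retrain cycle can be sketched with PyTorch's built-in pruning utilities; `train` and `retrain` below are hypothetical stand-ins for ordinary training loops, and the per-cycle pruning amount is an illustrative assumption:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_step(model: nn.Module, amount: float) -> None:
    """One pruning step: remove the smallest weights across all linear layers."""
    params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)

# Schematic cycle:
# train(model)                       # initial dense training
# for _ in range(n_cycles):
#     prune_step(model, amount=0.2)  # performance typically drops here ...
#     retrain(model)                 # ... and is recovered during retraining
```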
1 code implementation • 14 Oct 2020 • Sebastian Pokutta, Christoph Spiegel, Max Zimmer
In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants.
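A minimal sketch of one such stochastic Frank-Wolfe update, here for parameters constrained to an L1 ball; the per-tensor constraint, radius `tau`, and step size `eta` are illustrative assumptions rather than the exact variants compared in the paper:

```python
import torch

@torch.no_grad()
def frank_wolfe_step(param: torch.Tensor, tau: float, eta: float) -> None:
    """One Frank-Wolfe step keeping `param` inside the L1 ball of radius tau.

    The linear minimization oracle over the L1 ball returns a signed vertex:
    v = -tau * sign(g_i) * e_i, where i is the coordinate of largest |gradient|.
    """
    g = param.grad.flatten()
    i = torch.argmax(g.abs())
    v = torch.zeros_like(g)
    v[i] = -tau * torch.sign(g[i])
    # A convex combination of feasible points stays feasible: no projection needed.
    param.mul_(1 - eta).add_(eta * v.view_as(param))

# Usage (after loss.backward()):
# for p in model.parameters():
#     frank_wolfe_step(p, tau=10.0, eta=0.05)
```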
1 code implementation • 29 Sep 2020 • Cyrille W. Combettes, Christoph Spiegel, Sebastian Pokutta
In large-scale optimization, the complexity can lie both in handling the objective function and in handling the constraint set.