Search Results for author: Jens Mehnert

Found 7 papers, 0 papers with code

Instant Complexity Reduction in CNNs using Locality-Sensitive Hashing

no code implementations • 29 Sep 2023 • Lukas Meiner, Jens Mehnert, Alexandru Paul Condurache

In particular, we are able to instantly drop 46.72% of FLOPs while only losing 1.25% accuracy by just swapping the convolution modules in a ResNet34 on CIFAR-10 for our HASTE module.
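No code is released for the HASTE module itself, but the building block it relies on, locality-sensitive hashing, can be sketched independently: random hyperplanes map similar vectors to the same bucket, so redundant computations can be grouped. The hyperplane scheme and names below are a generic illustration, not the paper's implementation:

```python
import numpy as np

def lsh_signatures(vectors, n_planes=8, seed=0):
    """Hash each row vector to a bucket id via random hyperplanes.

    Rows lying on the same side of every hyperplane get the same
    integer signature, so similar vectors tend to collide.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[1]))
    bits = (vectors @ planes.T) > 0          # sign pattern per hyperplane
    return bits @ (1 << np.arange(n_planes))  # pack sign bits into an id

rng = np.random.default_rng(1)
v = rng.standard_normal(16)
# A near-duplicate of v collides with it; -v lands in a different bucket.
vecs = np.stack([v, v + 1e-9, -v])
sigs = lsh_signatures(vecs)
```

Grouping channels whose signatures collide is what allows cheap, training-free merging of redundant computation.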

Spectral Batch Normalization: Normalization in the Frequency Domain

no code implementations • 29 Jun 2023 • Rinor Cakaj, Jens Mehnert, Bin Yang

However, we show experimentally that, despite the approximate additive penalty of BN, feature maps in deep neural networks (DNNs) tend to explode at the beginning of the network and that feature maps of DNNs contain large values throughout training.
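The paper has no code release, but the general idea of normalizing feature maps in the frequency domain rather than the spatial domain can be sketched with NumPy: transform each map with a 2D FFT, standardize every frequency bin across the batch, and transform back. This is an illustration of frequency-domain normalization, not the authors' implementation:

```python
import numpy as np

def spectral_normalize(x, eps=1e-5):
    """Standardize a batch of 2D feature maps in the frequency domain.

    x: array of shape (batch, height, width).
    Each frequency bin is shifted/scaled to zero mean and unit variance
    across the batch, then mapped back to the spatial domain.
    """
    X = np.fft.fft2(x, axes=(-2, -1))                 # per-sample 2D spectrum
    mean = X.mean(axis=0, keepdims=True)
    var = np.mean(np.abs(X - mean) ** 2, axis=0, keepdims=True)
    X_hat = (X - mean) / np.sqrt(var + eps)
    return np.fft.ifft2(X_hat, axes=(-2, -1)).real   # back to spatial domain

# Feature maps with deliberately large values get tamed bin by bin.
x = np.random.default_rng(0).standard_normal((8, 4, 4)) * 100.0
y = spectral_normalize(x)
```

Because the input is real, its spectrum is Hermitian-symmetric and the per-bin statistics preserve that symmetry, so the inverse FFT is real up to floating-point error.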

Weight Compander: A Simple Weight Reparameterization for Regularization

no code implementations • 29 Jun 2023 • Rinor Cakaj, Jens Mehnert, Bin Yang

Large weights in deep neural networks are a sign of a more complex network that is overfitted to the training data.


Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey

no code implementations • 17 May 2022 • Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache

By freezing weights, the number of trainable parameters shrinks, which reduces gradient computations and the dimensionality of the model's optimization space.
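The mechanism the survey describes is framework-independent: frozen weights are simply excluded from gradient updates, so they need no gradients and do not count toward the optimization space. A minimal sketch with an illustrative mask-based SGD step (names are hypothetical, not from the survey):

```python
import numpy as np

def sgd_step(weights, grads, frozen_mask, lr=0.1):
    """Apply an SGD update only to weights that are not frozen.

    frozen_mask: boolean array, True where a weight is frozen.
    Frozen weights keep their initial value; in a real framework
    their gradients would not be computed at all.
    """
    return np.where(frozen_mask, weights, weights - lr * grads)

w = np.array([1.0, 2.0, 3.0, 4.0])
g = np.ones_like(w)
frozen = np.array([True, False, True, False])

w_new = sgd_step(w, g, frozen)
n_trainable = int((~frozen).sum())  # dimensionality of the optimization space
```

In PyTorch the same effect is achieved by setting `requires_grad = False` on a parameter, which also removes its gradient computation from the backward pass.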


FreezeNet: Full Performance by Reduced Storage Costs

no code implementations • 28 Nov 2020 • Paul Wimmer, Jens Mehnert, Alexandru Condurache

On the classification tasks MNIST and CIFAR-10/100, we outperform SNIP, the best reported one-shot pruning method for this setting, which is applied before training.
