1 code implementation • 9 Nov 2023 • Simon Wiedemann, Reinhard Heckel
At the same time, DeepDeWedge is simpler than the two-step approach of first denoising and then reconstructing the missing wedge, as it performs both tasks simultaneously rather than sequentially.
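For readers unfamiliar with the problem, the sketch below (illustrative only, not the DeepDeWedge method) shows what a missing wedge is: the tomographic tilt range is limited, so a wedge-shaped region of Fourier space is never measured. The `half_angle_deg` parameter is an assumed stand-in for the tilt range.

```python
# A minimal sketch (not the DeepDeWedge method) of the missing wedge:
# a wedge-shaped region of Fourier space is never measured.
import numpy as np

def missing_wedge_mask(shape, half_angle_deg=60.0):
    """Boolean mask that is True only for measured Fourier coefficients."""
    ny, nx = shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    # Keep frequencies whose direction lies within +-half_angle of the
    # x-axis; the rest form the unmeasured wedge.
    angle = np.degrees(np.arctan2(np.abs(ky), np.abs(kx) + 1e-12))
    return angle <= half_angle_deg

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
mask = missing_wedge_mask(image.shape)
corrupted = np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
print(f"fraction of Fourier space measured: {mask.mean():.2f}")
```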
no code implementations • 9 Jun 2022 • Simon Wiedemann, Daniel Hein, Steffen Udluft, Christian Mendl
We present a full implementation and simulation of a novel quantum reinforcement learning method.
no code implementations • 17 Dec 2020 • Simon Wiedemann, Suhas Shivapakash, Pablo Wiedemann, Daniel Becking, Wojciech Samek, Friedel Gerfers, Thomas Wiegand
With the growing demand for deploying deep learning models to the "edge", it is paramount to develop techniques that allow state-of-the-art models to be executed within very tight resource constraints.
no code implementations • 9 Apr 2020 • Simon Wiedemann, Temesgen Mehari, Kevin Kepp, Wojciech Samek
In this work, we propose a method for reducing the computational cost of backprop, which we call dithered backprop.
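To make the idea concrete, here is a minimal sketch of the kind of dithered quantization the name suggests: random dither is added before rounding, which makes the quantization unbiased in expectation and maps many small gradient entries exactly to zero. The step size and names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of dithered quantization, under assumed parameters.
import numpy as np

def dithered_quantize(grad, step, rng):
    """Stochastically round `grad` to integer multiples of `step`."""
    dither = rng.uniform(-0.5, 0.5, size=grad.shape)  # the dither signal
    return step * np.round(grad / step + dither)

rng = np.random.default_rng(0)
g = rng.normal(scale=1e-3, size=10_000)               # stand-in gradients
q = dithered_quantize(g, step=1e-3, rng=rng)
print(f"exact zeros: {(q == 0).mean():.1%}, mean error: {(q - g).mean():.2e}")
```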
2 code implementations • 2 Apr 2020 • Arturo Marban, Daniel Becking, Simon Wiedemann, Wojciech Samek
To address this problem, we propose Entropy-Constrained Trained Ternarization (EC2T), a general framework for creating sparse and ternary neural networks that are efficient in terms of storage (e.g., at most two binary masks and two full-precision values are required to save a weight matrix) and computation (e.g., MAC operations are reduced to a few accumulations plus two multiplications).
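The storage claim can be made concrete with a small sketch: a ternary matrix with values in {-w_n, 0, +w_p} is fully described by two binary masks plus the two scalars, and a matrix-vector product then needs only accumulations plus two multiplications. This illustrates the representation, not the EC2T training procedure.

```python
# Ternary weights as two binary masks plus two full-precision scalars.
import numpy as np

w_p, w_n = 0.31, 0.27                              # two full-precision values
rng = np.random.default_rng(0)
W = rng.choice([w_p, 0.0, -w_n], size=(4, 4), p=[0.2, 0.6, 0.2])

mask_pos = (W > 0).astype(np.float64)              # first binary mask
mask_neg = (W < 0).astype(np.float64)              # second binary mask
assert np.allclose(W, w_p * mask_pos - w_n * mask_neg)  # lossless

x = rng.normal(size=4)
y = w_p * (mask_pos @ x) - w_n * (mask_neg @ x)    # accumulate, scale twice
assert np.allclose(y, W @ x)
```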
1 code implementation • 18 Dec 2019 • Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.
Explainable Artificial Intelligence (XAI) • Model Compression • +2
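For readers curious what combining XAI with compression might look like, the following is a hypothetical sketch of criterion-based filter pruning, where a per-filter relevance score replaces the usual magnitude criterion. The `relevance` array is a placeholder for whatever an attribution method would supply; shapes and names are illustrative.

```python
# Hypothetical sketch: rank conv filters by a relevance score and drop
# the lowest-scoring ones. Placeholder scores, illustrative shapes.
import numpy as np

def prune_filters(weights, relevance, keep_ratio=0.75):
    """weights: (out_ch, in_ch, kh, kw); relevance: (out_ch,)."""
    n_keep = max(1, round(keep_ratio * weights.shape[0]))
    kept = np.sort(np.argsort(relevance)[-n_keep:])   # most relevant filters
    return weights[kept], kept

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8, 3, 3))                    # 16 conv filters
scores = rng.random(16)                               # placeholder relevances
W_pruned, kept = prune_filters(W, scores)
print(W_pruned.shape, kept)
```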
1 code implementation • 27 Jul 2019 • Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Tung Nguyen, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek
The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compression with little loss of information.
no code implementations • 15 May 2019 • Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek
We present DeepCABAC, a novel context-adaptive binary arithmetic coder for compressing deep neural networks.
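The "context-adaptive" part can be illustrated with a toy probability model: the coder maintains an adaptive estimate of P(bit = 1), and an arithmetic coder spends roughly -log2(p) bits per symbol under that estimate. The sketch below tracks this ideal code length only; DeepCABAC's actual contexts, quantization, and coder are considerably more involved, and the adaptation rule here is an assumption.

```python
# Toy illustration of context adaptation, not DeepCABAC itself.
import math
import random

class AdaptiveBinaryModel:
    def __init__(self, rate=0.05):
        self.p1, self.rate = 0.5, rate               # estimate of P(bit == 1)

    def cost_and_update(self, bit):
        p = self.p1 if bit else 1.0 - self.p1        # probability of this bit
        self.p1 += self.rate * (bit - self.p1)       # adapt toward the bit
        return -math.log2(p)                         # ideal code length

random.seed(0)
bits = [0] * 80 + [1] * 20                           # skewed binary source
random.shuffle(bits)
model = AdaptiveBinaryModel()
total = sum(model.cost_and_update(b) for b in bits)
print(f"{total / len(bits):.2f} bits/symbol vs 1.00 without a model")
```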
1 code implementation • 7 Mar 2019 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server.
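As a reference point for the protocol being studied, here is a minimal federated-averaging sketch: each party updates a copy of the model on its private data, and only parameters cross the network. The linear model, step size, and round count are illustrative assumptions.

```python
# Minimal federated-averaging sketch under assumed models and data.
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of least squares on one party's private data."""
    return w - lr * 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
parties = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w_global = np.zeros(3)
for _ in range(50):                                   # communication rounds
    updates = [local_step(w_global, X, y) for X, y in parties]
    w_global = np.mean(updates, axis=0)               # server-side averaging
print(w_global)
```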
no code implementations • 18 Dec 2018 • Simon Wiedemann, Arturo Marban, Klaus-Robert Müller, Wojciech Samek
We propose a general framework for neural network compression that is motivated by the Minimum Description Length (MDL) principle.
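The MDL view can be made concrete with a toy two-part code: total description length is the bits needed for the quantized model plus the bits for the data residual given that model, and coarser quantization trades the first term against the second. The Gaussian residual code below (sigma = 1) is an assumption for illustration, not the paper's objective.

```python
# Toy two-part code: model bits + residual bits, as a function of the
# quantization step size. Gaussian residual code is an assumption.
import numpy as np

def entropy_bits(symbols):
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum()) * len(symbols)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                             # stand-in weights
for step in [0.01, 0.1, 0.5, 1.0]:
    symbols = np.round(w / step).astype(int)          # quantized model
    model_bits = entropy_bits(symbols)
    resid_bits = 0.5 * np.sum((w - step * symbols) ** 2) / np.log(2)
    print(f"step={step:<4}: model {model_bits:7.0f} + data {resid_bits:6.0f} bits")
```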
no code implementations • NIPS Workshop CDNNRIA 2018 • Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
However, most of these common matrix storage formats make strong statistical assumptions about the distribution of the elements in the matrix, and therefore cannot efficiently represent the entire set of matrices that exhibit low-entropy statistics (and thus the entire set of compressed neural network weight matrices).
no code implementations • 27 May 2018 • Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, implying that they are guaranteed to become more efficient as the entropy of the matrix is reduced.
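The claimed property can be sanity-checked numerically: the empirical entropy of a matrix is a per-entry lower bound on its coded size, so any format whose size tracks the entropy shrinks automatically as, e.g., sparsity rises. The sketch below computes that bound only; the paper's actual formats are not reproduced here.

```python
# Entropy as a per-entry lower bound on coded size; not the paper's formats.
import numpy as np

def entropy_per_entry(M):
    _, counts = np.unique(M, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
for sparsity in [0.5, 0.9, 0.99]:
    M = rng.integers(0, 16, size=10_000)              # 4-bit values
    M[rng.random(10_000) < sparsity] = 0              # lower the entropy
    print(f"sparsity {sparsity:<4}: {entropy_per_entry(M):.2f} bits/entry "
          f"(4 bits fixed-width)")
```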
no code implementations • 22 May 2018 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
A major issue in distributed training is the limited communication bandwidth between contributing nodes and, more generally, the prohibitive cost of communication.
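One standard remedy this line of work builds on, sketched below under illustrative assumptions rather than as the paper's full scheme, is to transmit only the k largest-magnitude entries of each update and keep the remainder in a local error accumulator, so withheld information is delayed rather than lost.

```python
# Top-k gradient sparsification with error feedback; illustrative only.
import numpy as np

def topk_compress(grad, residual, k):
    g = grad + residual                               # re-add withheld mass
    idx = np.argpartition(np.abs(g), -k)[-k:]         # k largest magnitudes
    msg = np.zeros_like(g)
    msg[idx] = g[idx]
    return msg, g - msg                               # message, new residual

rng = np.random.default_rng(0)
residual = np.zeros(1000)
for step in range(3):
    grad = rng.normal(size=1000)
    msg, residual = topk_compress(grad, residual, k=10)
    print(f"step {step}: sent {np.count_nonzero(msg)}/{msg.size} entries")
```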