Search Results for author: Bernd Waschneck

Found 7 papers, 2 papers with code

Temporal Decisions: Leveraging Temporal Correlation for Efficient Decisions in Early Exit Neural Networks

no code implementations12 Mar 2024 Max Sponner, Lorenzo Servadei, Bernd Waschneck, Robert Wille, Akash Kumar

These findings highlight the importance of considering temporal correlation in sensor data to improve the termination decision.

Image Classification
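The abstract points to using temporal correlation between consecutive sensor samples to decide when an early-exit network may terminate. A minimal sketch of such a rule (the thresholds, the cosine-similarity criterion, and the function name are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def should_exit_early(exit_logits, prev_logits,
                      conf_threshold=0.8, sim_threshold=0.9):
    """Hypothetical early-exit rule: terminate at an intermediate exit when
    the exit is confident on its own, or when its output distribution closely
    matches the previous time step's (temporal correlation)."""
    probs = np.exp(exit_logits - exit_logits.max())
    probs /= probs.sum()
    if probs.max() >= conf_threshold:      # confident on its own
        return True
    if prev_logits is not None:
        prev = np.exp(prev_logits - prev_logits.max())
        prev /= prev.sum()
        # cosine similarity between consecutive output distributions
        sim = float(probs @ prev /
                    (np.linalg.norm(probs) * np.linalg.norm(prev)))
        if sim >= sim_threshold:           # temporally stable -> exit
            return True
    return False
```

On correlated streams, an uncertain exit can still terminate early because its output barely changes from one sample to the next, which is the intuition the snippet encodes.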

Efficient Post-Training Augmentation for Adaptive Inference in Heterogeneous and Distributed IoT Environments

no code implementations12 Mar 2024 Max Sponner, Lorenzo Servadei, Bernd Waschneck, Robert Wille, Akash Kumar

For an ECG classification task, it was able to terminate all samples early, reducing the mean inference energy by 74.9% and computations by 78.3%.

ECG Classification · Image Classification

Temporal Patience: Efficient Adaptive Deep Learning for Embedded Radar Data Processing

no code implementations11 Sep 2023 Max Sponner, Julius Ott, Lorenzo Servadei, Bernd Waschneck, Robert Wille, Akash Kumar

Radar sensors offer power-efficient solutions for always-on smart devices, but processing the data streams on resource-constrained embedded platforms remains challenging.

Convolutional Neural Networks Quantization with Attention

no code implementations30 Sep 2022 Binyi Wu, Bernd Waschneck, Christian Georg Mayr

It is well established that Deep Convolutional Neural Networks (DCNNs), although trained with 32-bit floating-point numbers, can operate at low precision during inference, reducing both memory footprint and power consumption.

Quantization
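The abstract's claim that DCNNs trained in 32-bit float can run at low precision can be illustrated with plain uniform quantization. The helper below is a generic sketch of symmetric integer quantization, not the attention-based scheme the paper proposes:

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Uniform symmetric quantization: map float values onto a signed
    num_bits integer grid and back (fake quantization), a rough stand-in
    for low-precision inference."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)   # integer grid
    return q * scale, scale                             # dequantized values

w = np.array([0.5, -0.25, 0.127, -0.9])   # example float32-style weights
w_q, scale = quantize_uniform(w, num_bits=8)
```

The round-trip error per weight is bounded by half the quantization step `scale`, which is why 8-bit inference often loses little accuracy relative to the float model.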

Combining Gradients and Probabilities for Heterogeneous Approximation of Neural Networks

1 code implementation15 Aug 2022 Elias Trommer, Bernd Waschneck, Akash Kumar

We further demonstrate that our error model can predict the parameters of an approximate multiplier in the context of the commonly used additive Gaussian noise (AGN) model with high accuracy.

Combinatorial Optimization
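The snippet mentions modeling an approximate multiplier with the additive Gaussian noise (AGN) model. A minimal sketch of that error model (the noise level and function name are hypothetical; a real multiplier's error statistics would be characterized from its hardware description):

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_multiply(a, b, noise_std=0.05):
    """Additive Gaussian noise (AGN) model of an approximate multiplier:
    the hardware error is treated as zero-mean Gaussian noise added to
    the exact product. noise_std is a multiplier-specific parameter."""
    noise = rng.normal(0.0, noise_std, size=np.broadcast(a, b).shape)
    return a * b + noise
```

Under this model, predicting a multiplier's behavior reduces to estimating its noise standard deviation, which is what makes the AGN abstraction convenient for comparing heterogeneous approximations.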

Compiler Toolchains for Deep Learning Workloads on Embedded Platforms

no code implementations8 Mar 2021 Max Sponner, Bernd Waschneck, Akash Kumar

As deep learning becomes increasingly popular in mobile and embedded solutions, framework-specific network representations must be converted into executable code for these embedded platforms.
