Search Results for author: Bernhard A. Moser

Found 14 papers, 6 papers with code

On Leaky-Integrate-and Fire as Spike-Train-Quantization Operator on Dirac-Superimposed Continuous-Time Signals

no code implementations • 10 Feb 2024 • Bernhard A. Moser, Michael Lunglmayr

Leaky-integrate-and-fire (LIF) is studied as a non-linear operator that maps an integrable signal $f$ to a sequence $\eta_f$ of discrete events, the spikes.

Quantization
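
For illustration, a minimal NumPy sketch of such a LIF operator on a sampled signal, assuming a discrete time grid, an exponential leak, and reset-by-subtraction after each threshold crossing (function and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def lif_spike_train(f, threshold=1.0, leak=0.0, dt=1e-3):
    """Map a sampled signal f (1-D array) to a list of (time, polarity) spike events.

    Minimal leaky-integrate-and-fire sketch: the membrane potential integrates
    the input with an exponential leak; whenever its magnitude reaches the
    threshold, a signed unit spike is emitted and the threshold is subtracted
    ("reset by subtraction").
    """
    decay = np.exp(-leak * dt)                        # per-step leak factor
    potential, events = 0.0, []
    for n, sample in enumerate(f):
        potential = decay * potential + sample * dt   # leaky integration
        while abs(potential) >= threshold:            # threshold crossing
            sign = np.sign(potential)
            events.append((n * dt, sign))             # spike time and polarity
            potential -= sign * threshold             # reset by subtraction
    return events
```

For example, `lif_spike_train(np.sin(2 * np.pi * 5 * np.arange(0, 1, 1e-3)), threshold=0.05)` returns a list of signed spike events approximating the input signal.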

SNN Architecture for Differential Time Encoding Using Decoupled Processing Time

no code implementations • 24 Nov 2023 • Daniel Windhager, Bernhard A. Moser, Michael Lunglmayr

We present synthesis and performance results showing that this architecture can be implemented for networks of more than 1000 neurons with high clock speeds on a state-of-the-art FPGA.

Quantization

Quantization in Spiking Neural Networks

1 code implementation • 13 May 2023 • Bernhard A. Moser, Michael Lunglmayr

In spiking neural networks (SNN), at each node, an incoming sequence of weighted Dirac pulses is converted into an output sequence of weighted Dirac pulses by a leaky-integrate-and-fire (LIF) neuron model based on spike aggregation and thresholding.

Quantization

Spiking Neural Networks in the Alexiewicz Topology: A New Perspective on Analysis and Error Bounds

1 code implementation • 9 May 2023 • Bernhard A. Moser, Michael Lunglmayr

A central question is the adequate structure for a space of spike trains and its implication for the design of error measurements of SNNs including time delay, threshold deviations, and the design of the reinitialization mode of the leaky-integrate-and-fire (LIF) neuron model.

Quantization
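
A small sketch of a spike-train error measure in this spirit, assuming an Alexiewicz-type norm given by the maximal absolute partial sum of the spike amplitudes taken in temporal order, and spike trains given on a common time grid (the exact construction in the paper may differ):

```python
import numpy as np

def alexiewicz_norm(amplitudes):
    """Maximal absolute partial sum of the spike amplitudes, in temporal order."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return float(np.max(np.abs(np.cumsum(amplitudes)))) if amplitudes.size else 0.0

def alexiewicz_distance(a, b):
    """Distance between two spike trains aligned on the same time grid."""
    return alexiewicz_norm(np.asarray(a, float) - np.asarray(b, float))
```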

Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

1 code implementation • 2 May 2023 • Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger

Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees.

Unsupervised Domain Adaptation

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning • Data Poisoning

The balancing principle for parameter choice in distance-regularized domain adaptation

1 code implementation • NeurIPS 2021 • Werner Zellinger, Natalia Shepeleva, Marius-Constantin Dinu, Hamid Eghbal-zadeh, Hoan Nguyen, Bernhard Nessler, Sergei Pereverzyev, Bernhard A. Moser

Our approach starts with the observation that the widely-used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems.

Unsupervised Domain Adaptation
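
A minimal PyTorch sketch of the penalized objective described above, assuming a generic encoder/classifier pair and using a simple mean-feature discrepancy as a stand-in for the distance measure; the paper's actual distance and its balancing-principle choice of the weight are not reproduced here:

```python
import torch
import torch.nn.functional as F

def mean_feature_distance(feat_s, feat_t):
    """Placeholder distance: L2 norm between mean source and target features."""
    return torch.norm(feat_s.mean(dim=0) - feat_t.mean(dim=0))

def da_objective(encoder, classifier, x_s, y_s, x_t, lam):
    """Source error penalized by a source/target feature distance, weighted by lam."""
    feat_s, feat_t = encoder(x_s), encoder(x_t)
    source_error = F.cross_entropy(classifier(feat_s), y_s)
    return source_error + lam * mean_feature_distance(feat_s, feat_t)
```

The regularization weight `lam` plays the role of the parameter whose choice the balancing principle addresses.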

Information Theoretic Evaluation of Privacy-Leakage, Interpretability, and Transferability for Trustworthy AI

no code implementations • 6 Jun 2021 • Mohit Kumar, Bernhard A. Moser, Lukas Fischer, Bernhard Freudenthaler

A variational membership-mapping Bayesian model is used for the analytical approximations of the defined information theoretic measures for privacy-leakage, interpretability, and transferability.

Heart Rate Variability • Privacy Preserving

On Data Augmentation and Adversarial Risk: An Empirical Analysis

no code implementations • 6 Jul 2020 • Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer

Data augmentation techniques have become standard practice in deep learning, as they have been shown to greatly improve the generalisation abilities of models.

Adversarial Attack • Data Augmentation

On generalization in moment-based domain adaptation

no code implementations • 19 Feb 2020 • Werner Zellinger, Bernhard A. Moser, Susanne Saminger-Platz

Domain adaptation algorithms are designed to minimize the misclassification risk of a discriminative model for a target domain with little training data by adapting a model from a source domain with a large amount of training data.

Domain Adaptation • Generalization Bounds
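
A rough NumPy sketch of a moment-matching discrepancy in the spirit of moment-based domain adaptation, assuming it compares the first few central moments of source and target feature samples; weights and normalizations in the actual method may differ:

```python
import numpy as np

def moment_distance(xs, xt, k_max=3):
    """Sum of L2 distances between the first k_max moments of source and
    target feature samples (rows = samples, columns = features)."""
    xs, xt = np.asarray(xs, float), np.asarray(xt, float)
    ms, mt = xs.mean(axis=0), xt.mean(axis=0)
    dist = np.linalg.norm(ms - mt)                   # first (raw) moments
    for k in range(2, k_max + 1):                    # higher central moments
        dist += np.linalg.norm(((xs - ms) ** k).mean(axis=0)
                               - ((xt - mt) ** k).mean(axis=0))
    return dist
```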

Deep SNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data

1 code implementation • 22 Jun 2018 • Hamid Eghbal-zadeh, Lukas Fischer, Niko Popitsch, Florian Kromp, Sabine Taschner-Mandl, Khaled Koutini, Teresa Gerber, Eva Bozsaky, Peter F. Ambros, Inge M. Ambros, Gerhard Widmer, Bernhard A. Moser

We show that Deep SNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models.
