no code implementations • 14 Nov 2023 • Ana Răduţoiu, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger
Neural networks form the foundation of many intelligent systems, yet they are known to be easily fooled by adversarial examples.
no code implementations • 5 Oct 2023 • Armin Ettenhofer, Jan-Philipp Schulze, Karla Pizzi
Audio adversarial examples are audio files that have been manipulated to fool an automatic speech recognition (ASR) system, while still sounding benign to a human listener.
Automatic Speech Recognition (ASR) +1
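The entry above describes inputs perturbed to fool a model while appearing benign. A minimal sketch of that core mechanism on a toy linear classifier (not an actual ASR attack; the model, dimensions, and epsilon are all illustrative choices):

```python
import numpy as np

# Hedged sketch of the adversarial-example idea: a small, gradient-aligned
# perturbation flips a toy linear classifier's decision while each input
# dimension changes only slightly. Toy stand-in, not the papers' method.

rng = np.random.default_rng(2)
w = rng.normal(size=256)              # toy "model": predicts sign(w @ x)
x = w / np.linalg.norm(w)             # input firmly in class +1

def predict(v):
    return np.sign(w @ v)

eps = 0.1
# FGSM-style step: move each dimension against the sign of the score
# gradient w.r.t. the input, bounded by eps per dimension.
x_adv = x - eps * np.sign(w)
```

In high dimensions the L1 norm of the gradient dwarfs its L2 norm, so many tiny per-dimension changes suffice to cross the decision boundary; that imbalance is what makes such attacks hard to hear or see in rich inputs like audio.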
1 code implementation • 21 Jun 2022 • Jan-Philipp Schulze, Philip Sperl, Ana Răduţoiu, Carla Sagebiel, Konstantin Böttinger
Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss.
Semi-supervised Anomaly Detection • Supervised Anomaly Detection
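The gradient-based learning scheme mentioned in the entry above can be sketched in a few lines: a one-layer linear model adapts its mapping parameters by back-propagating the output loss (toy data and hyperparameters are illustrative, not from the paper):

```python
import numpy as np

# Minimal sketch of gradient-based learning: fit a linear map by
# back-propagating the mean-squared output loss into the parameters.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                 # toy inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                               # toy targets

w = np.zeros(3)                              # mapping parameters
lr = 0.1                                     # learning rate

losses = []
for _ in range(100):
    err = X @ w - y                          # forward pass and residual
    losses.append(np.mean(err ** 2))         # output loss (MSE)
    grad = 2 * X.T @ err / len(X)            # back-propagated gradient dL/dw
    w -= lr * grad                           # gradient descent step
```

Each iteration propagates the loss gradient back through the linear map and nudges `w` toward the loss minimum; deep networks repeat the same chain-rule step layer by layer.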
1 code implementation • 3 Mar 2020 • Philip Sperl, Jan-Philipp Schulze, Konstantin Böttinger
Based on the activation values in the target network, the alarm network decides whether the given sample is normal.
Semi-supervised Anomaly Detection • Supervised Anomaly Detection
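The target/alarm idea from the entry above can be illustrated with a toy stand-in: an "alarm" scores a sample by how far its hidden activations in a (here frozen, random) target network lie from the activation profile of normal data. The layer, distance score, and threshold are hypothetical simplifications, not the authors' implementation:

```python
import numpy as np

# Hedged sketch: the alarm inspects the target network's activations and
# flags samples whose activation pattern deviates from the normal profile.

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))                      # frozen "target network" layer

def activations(batch):
    return np.maximum(0.0, batch @ W)            # ReLU hidden activations

normal = rng.normal(0.0, 1.0, size=(200, 4))     # toy normal data
anomalous = rng.normal(4.0, 1.0, size=(20, 4))   # toy shifted anomalies

mu = activations(normal).mean(axis=0)            # activation profile of normal data

def score(batch):
    # Alarm score: distance of a sample's activations to the normal profile.
    return np.linalg.norm(activations(batch) - mu, axis=1)

thresh = np.quantile(score(normal), 0.95)        # alarm fires above this score
flagged = score(anomalous) > thresh
```

In the papers the alarm is itself a trained network reading the target network's activations; the distance threshold here merely stands in for that learned decision.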