no code implementations • 19 Feb 2024 • Daniel Kowatsch, Nicolas M. Müller, Kilian Tscharke, Philip Sperl, Konstantin Böttinger
For classification, the problem of class imbalance is well known and has been extensively studied.
no code implementations • 9 Feb 2024 • Nicolas M. Müller, Piotr Kawa, Shen Hu, Matthias Neu, Jennifer Williams, Philip Sperl, Konstantin Böttinger
We argue that this binary distinction is oversimplified.
no code implementations • 30 Oct 2023 • Nicolas M. Müller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, Konstantin Böttinger
Machine-learning (ML) shortcuts or spurious correlations are artifacts in datasets that lead to very good training and test performance but severely limit the model's generalization capability.
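A minimal illustrative sketch (not from the paper, purely synthetic data) of what such a shortcut looks like: a feature that is almost perfectly aligned with the label in both the training and test split, but vanishes at deployment, so the model that latched onto it stops generalizing.

```python
# Synthetic "shortcut" feature: predictive during training/testing,
# uninformative once the dataset artifact disappears at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_correlates=True):
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)          # weak, genuinely task-related feature
    shortcut = y if shortcut_correlates else rng.integers(0, 2, n)
    shortcut = shortcut + rng.normal(0, 0.1, n) # nearly noise-free artifact
    return np.column_stack([signal, shortcut]), y

X_train, y_train = make_data(5000, shortcut_correlates=True)
X_test,  y_test  = make_data(5000, shortcut_correlates=True)   # artifact still present
X_real,  y_real  = make_data(5000, shortcut_correlates=False)  # artifact gone

clf = LogisticRegression().fit(X_train, y_train)
print("test accuracy (shortcut present):", clf.score(X_test, y_test))    # near perfect
print("deployment accuracy (shortcut gone):", clf.score(X_real, y_real)) # drops sharply
```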
no code implementations • 22 Aug 2023 • Nicolas M. Müller, Philip Sperl, Konstantin Böttinger
Current anti-spoofing and audio deepfake detection systems use either magnitude-spectrogram-based features (such as CQT or mel-spectrograms) or raw audio processed through convolutional or sinc layers.
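A short sketch of the two front-ends mentioned above, assuming librosa is available and using a hypothetical file name `utterance.wav`: the raw waveform that convolutional or sinc layers would consume directly, and a log-mel magnitude spectrogram (a CQT would be computed analogously).

```python
import librosa

# Raw audio input, as fed to convolutional / sinc-layer front-ends.
waveform, sr = librosa.load("utterance.wav", sr=16000)

# Magnitude-spectrogram-based features: log-mel spectrogram.
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_fft=512,
                                     hop_length=160, n_mels=80)
log_mel = librosa.power_to_db(mel)           # shape: (80 mel bins, n_frames)

print("raw waveform samples:", waveform.shape)
print("log-mel feature matrix:", log_mel.shape)
```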
1 code implementation • 8 Feb 2023 • Nicolas M. Müller, Simon Roschmann, Shahbaz Khan, Philip Sperl, Konstantin Böttinger
For real-world applications of machine learning (ML), it is essential that models make predictions based on well-generalizing features rather than spurious correlations in the data.
no code implementations • 24 Nov 2022 • Nicolas M. Müller, Jochen Jacobs, Jennifer Williams, Konstantin Böttinger
This is often due to the existence of machine learning shortcuts - features in the data that are predictive but unrelated to the problem at hand.
no code implementations • 30 Mar 2022 • Nicolas M. Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, Konstantin Böttinger
Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research.
no code implementations • 28 Mar 2022 • Nicolas M. Müller, Franziska Dieckmann, Jennifer Williams
This is despite the fact that attribution (who created which fake?) is an important building block of a larger defense strategy.
no code implementations • 20 Jul 2021 • Nicolas M. Müller, Karla Pizzi, Jennifer Williams
The recent emergence of deepfakes has brought manipulated and generated content to the forefront of machine learning research.
no code implementations • 14 Apr 2021 • Nicolas M. Müller, Simon Roschmann, Konstantin Böttinger
Since many applications rely on untrusted training data, an attacker can easily craft malicious samples and inject them into the training dataset to degrade the performance of machine learning models.
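A minimal sketch (not the paper's method) of the simplest such attack, label-flipping poisoning: an attacker who controls a fraction of the training data flips labels and measurably degrades the trained model. Dataset and classifier are stand-ins chosen for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def poison(labels, fraction, rng=np.random.default_rng(0)):
    """Flip the labels of a random fraction of training samples."""
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]
    return labels

for frac in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, poison(y_tr, frac))
    print(f"poisoned fraction {frac:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```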
no code implementations • 26 Jan 2021 • Nicolas M. Müller, Konstantin Böttinger
In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner.
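A rough PyTorch sketch of the collision idea (an illustration under stated assumptions, not the paper's exact procedure): a small, bounded perturbation is optimized so that a frozen feature extractor maps one input almost onto another, i.e. the two collide in the transfer learner's feature space. The ResNet-18 backbone and the noise budget `eps` are arbitrary stand-ins.

```python
import torch
import torchvision.models as models

# Stand-in for the pre-trained, frozen backbone of a transfer learner.
extractor = models.resnet18(weights=None)
extractor.fc = torch.nn.Identity()          # expose penultimate features
extractor.eval()

x_base   = torch.rand(1, 3, 224, 224)       # sample the attacker perturbs
x_target = torch.rand(1, 3, 224, 224)       # sample whose features we want to collide with
target_feat = extractor(x_target).detach()

delta = torch.zeros_like(x_base, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255                               # keep the adversarial noise small

for _ in range(200):
    opt.zero_grad()
    feat = extractor((x_base + delta).clamp(0, 1))
    loss = torch.nn.functional.mse_loss(feat, target_feat)
    loss.backward()
    opt.step()
    delta.data.clamp_(-eps, eps)            # project back onto the noise budget

print("feature-space distance after optimization:", loss.item())
```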
2 code implementations • 14 Oct 2020 • Tom Dörr, Karla Markert, Nicolas M. Müller, Konstantin Böttinger
We devise an approach to mitigate this flaw and find that our method improves generation of adversarial examples with varying offsets.