no code implementations • 22 Apr 2024 • Jonas Ricker, Dennis Assenmacher, Thorsten Holz, Asja Fischer, Erwin Quiring
Recent advances in the field of generative artificial intelligence (AI) have blurred the lines between authentic and machine-generated content, making it almost impossible for humans to tell the two apart.
1 code implementation • 10 Dec 2023 • Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten Eisenhofer, Asja Fischer, Markus Dürmuth, Thorsten Holz
To further understand which factors influence people's ability to detect generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research.
1 code implementation • 25 Mar 2023 • Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck
In this paper, we show that this automation can be manipulated using adversarial learning.
2 code implementations • 23 Feb 2023 • Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, Mario Fritz
Large Language Models (LLMs) are increasingly being integrated into various applications.
no code implementations • 8 Feb 2023 • Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz
We evaluate the effectiveness of our approach by examining code language models in generating high-risk security weaknesses.
1 code implementation • 26 Oct 2022 • Jonas Ricker, Simon Damm, Thorsten Holz, Asja Fischer
However, relatively little attention has been paid to the detection of DM-generated images, which is critical to prevent adverse impacts on our society.
1 code implementation • 7 Apr 2021 • Joel Frank, Thorsten Holz
This work evaluates the reproducibility of the paper "CNN-generated images are surprisingly easy to spot... for now" by Wang et al., published at CVPR 2020.
1 code implementation • 10 Feb 2021 • Thorsten Eisenhofer, Lea Schönherr, Joel Frank, Lars Speckemeier, Dorothea Kolossa, Thorsten Holz
In this paper we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners.
Automatic Speech Recognition (ASR) +1
no code implementations • AABI Symposium 2021 • Sina Däubener, Joel Frank, Thorsten Holz, Asja Fischer
In this paper we propose to efficiently attack Bayesian neural networks with adversarial examples calculated for a deterministic network with parameters given by the mean of the posterior distribution.
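The idea in this entry can be sketched on a toy model. Everything below is an illustrative assumption, not the paper's actual setup: a logistic-regression classifier stands in for the network, the weight "posterior" is synthetic samples rather than one obtained by approximate inference, and the attack is a single FGSM step on the deterministic mean-weight model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Bayesian" logistic regression: the posterior over weights is
# represented by samples (a real BNN would obtain these via variational
# inference or MCMC). All values here are synthetic, for illustration.
d = 10
w_true = rng.normal(size=d)
posterior = w_true + 0.1 * rng.normal(size=(200, d))
w_mean = posterior.mean(axis=0)          # deterministic "mean network"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A clean input that the ensemble classifies as positive.
x = w_true / np.linalg.norm(w_true)
y = 1.0

# FGSM computed once, on the mean network only: step against the sign
# of the gradient of the cross-entropy loss w.r.t. the input
# (for logistic regression, dL/dx = (sigmoid(w.x) - y) * w).
eps = 0.5
grad_x = (sigmoid(w_mean @ x) - y) * w_mean
x_adv = x + eps * np.sign(grad_x)

# The perturbation transfers to the posterior samples: the ensemble's
# average probability for the true class drops.
p_clean = sigmoid(posterior @ x).mean()
p_adv = sigmoid(posterior @ x_adv).mean()
print(f"mean p(y=1): clean {p_clean:.2f} -> adversarial {p_adv:.2f}")
```

The point of the sketch is only that a single attack against the posterior mean can degrade the whole ensemble, avoiding per-sample gradient computations.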
1 code implementation • 21 Oct 2020 • Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna
In a more realistic scenario, when the target audio waveform is played over the air in different rooms, VENOMAVE maintains a success rate of up to 73.3%.
Automatic Speech Recognition (ASR) +3
1 code implementation • ICML 2020 • Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz
Based on this analysis, we demonstrate how the frequency representation can be used to identify deep fake images in an automated way, surpassing state-of-the-art methods.
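The frequency-domain intuition behind this entry can be sketched with synthetic data. The images, the checkerboard-style artifact, and the simple energy threshold below are all illustrative assumptions; the actual work analyzes DCT spectra of real and generated images and trains classifiers on them.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
N = 64

def high_freq_energy(img, cutoff=16):
    """Energy of the 2-D DCT spectrum outside the low-frequency corner."""
    spec = dctn(img, norm="ortho")
    mask = np.ones_like(spec, dtype=bool)
    mask[:cutoff, :cutoff] = False
    return float((spec[mask] ** 2).sum())

def make_real():
    # Stand-in for natural images: smooth low-frequency content
    # plus mild sensor noise.
    x = np.linspace(0, 1, N)
    return (np.add.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
            + 0.05 * rng.normal(size=(N, N)))

def make_fake():
    # Same content plus a checkerboard-style pattern, a crude stand-in
    # for the grid artifacts that generator upsampling tends to leave.
    img = make_real()
    img[::2, ::2] += 0.5
    return img

reals = [high_freq_energy(make_real()) for _ in range(50)]
fakes = [high_freq_energy(make_fake()) for _ in range(50)]

# On this synthetic data, a single threshold on high-frequency energy
# separates the two populations.
threshold = (np.mean(reals) + np.mean(fakes)) / 2
accuracy = (np.mean([s < threshold for s in reals])
            + np.mean([s > threshold for s in fakes])) / 2
print(f"separation accuracy: {accuracy:.2f}")
```

The sketch shows only why upsampling artifacts are easy to spot in the frequency domain; real detectors replace the hand-set threshold with a trained classifier over full spectra.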
1 code implementation • 5 Sep 2019 • Christine Utz, Martin Degeling, Sascha Fahl, Florian Schaub, Thorsten Holz
We also show that the widespread practice of nudging has a large effect on the choices users make.
Human-Computer Interaction Computers and Society
no code implementations • 5 Aug 2019 • Lea Schönherr, Thorsten Eisenhofer, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa
In this paper, we demonstrate the first algorithm that produces generic adversarial examples, which remain robust in an over-the-air attack that is not adapted to the specific environment.
Automatic Speech Recognition (ASR) +1
no code implementations • 16 Aug 2018 • Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa
We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception.
Cryptography and Security Sound Audio and Speech Processing
1 code implementation • 15 Aug 2018 • Martin Degeling, Christine Utz, Christopher Lentzsch, Henry Hosseini, Florian Schaub, Thorsten Holz
We categorized all observed cookie consent notices and evaluated 16 common implementations with respect to their technical realization of cookie consent.
Computers and Society