1 code implementation • 10 Dec 2023 • Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten Eisenhofer, Asja Fischer, Markus Dürmuth, Thorsten Holz
To further understand which factors influence people's ability to detect generated media, we include personal variables selected through a literature review of deepfake and fake news research.
2 code implementations • 4 Nov 2021 • Joel Frank, Lea Schönherr
Deep generative modeling has the potential to cause significant harm to society.
1 code implementation • 7 Apr 2021 • Joel Frank, Thorsten Holz
This work evaluates the reproducibility of the paper "CNN-generated images are surprisingly easy to spot... for now" by Wang et al., published at CVPR 2020.
1 code implementation • 10 Feb 2021 • Thorsten Eisenhofer, Lea Schönherr, Joel Frank, Lars Speckemeier, Dorothea Kolossa, Thorsten Holz
In this paper, we propose a different perspective: we accept the presence of adversarial examples against ASR systems, but require that they be perceivable by human listeners.
Automatic Speech Recognition (ASR)
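The underlying idea lends itself to a short illustration: if the recognizer discards inaudible spectral content before decoding, any adversarial perturbation that survives must carry audible energy. Below is a minimal Python sketch of that filtering step, assuming a mono NumPy audio signal; the fixed relative dB cutoff is an illustrative stand-in for the proper psychoacoustic hearing-threshold model used in the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def audible_only(audio, fs=16000, rel_threshold_db=-60.0):
    """Zero STFT bins more than `rel_threshold_db` below the loudest bin,
    removing low-energy (roughly inaudible) content before recognition.
    The flat relative threshold is a simplification for this sketch."""
    _, _, Z = stft(audio, fs=fs, nperseg=512)
    mag_db = 20.0 * np.log10(np.abs(Z) + 1e-12)
    Z[mag_db < mag_db.max() + rel_threshold_db] = 0.0
    _, cleaned = istft(Z, fs=fs, nperseg=512)
    return cleaned

# Toy usage: a sine tone plus a tiny "hidden" perturbation that the filter erases.
fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 1e-5 * np.random.randn(fs)
filtered = audible_only(signal, fs=fs)
```

An attack that hides its perturbation below the threshold is simply removed before the ASR system sees the signal, forcing the attacker into the audible range.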
no code implementations • AABI Symposium 2021 • Sina Däubener, Joel Frank, Thorsten Holz, Asja Fischer
In this paper we propose to efficiently attack Bayesian neural networks with adversarial examples calculated for a deterministic network with parameters given by the mean of the posterior distribution.
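The attack reduces to two steps: collapse the posterior into a single deterministic network by averaging the weight draws, then run a standard gradient-based attack on that network. A minimal PyTorch sketch, assuming the posterior is available as a list of `state_dict`s (`posterior_samples` is an illustrative interface, not the paper's API); single-step FGSM stands in for whichever gradient attack is used.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def posterior_mean_network(template, posterior_samples):
    """Build the deterministic network whose weights are the posterior mean.
    Assumes all state_dict entries are floating-point tensors, as in a plain MLP."""
    mean_state = {
        name: torch.stack([s[name] for s in posterior_samples]).mean(dim=0)
        for name in posterior_samples[0]
    }
    template.load_state_dict(mean_state)
    return template

def fgsm(model, x, y, eps=0.03):
    """One-step FGSM adversarial example on the deterministic mean network."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Illustrative usage with a toy MLP and synthetic posterior draws.
net = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
draws = [{k: v + 0.01 * torch.randn_like(v) for k, v in net.state_dict().items()}
         for _ in range(10)]
mean_net = posterior_mean_network(net, draws)
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
x_adv = fgsm(mean_net, x, y)
```

The transferred `x_adv` would then be evaluated against the full Bayesian network, whose prediction averages the softmax outputs over the posterior draws.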
1 code implementation • ICML 2020 • Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz
Based on this analysis, we demonstrate how the frequency representation can be used to identify deepfake images in an automated way, surpassing state-of-the-art methods.
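A minimal Python sketch of this frequency-analysis pipeline: transform each image with the 2D DCT (the representation used in the ICML 2020 paper), log-scale the spectra, and fit a linear classifier. The random arrays are placeholders for real and GAN-generated grayscale images, and the logistic-regression classifier is an illustrative choice, not the paper's exact setup.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.linear_model import LogisticRegression

def dct_features(images):
    """Flattened, log-scaled 2D-DCT spectrum of each grayscale image."""
    return np.array([
        np.log(np.abs(dctn(img.astype(np.float64), norm="ortho")) + 1e-12).ravel()
        for img in images
    ])

rng = np.random.default_rng(0)
X_real = rng.random((32, 64, 64))  # placeholder real images (sketch only)
X_fake = rng.random((32, 64, 64))  # placeholder generated images (sketch only)

X = dct_features(np.concatenate([X_real, X_fake]))
y = np.concatenate([np.zeros(len(X_real)), np.ones(len(X_fake))])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))  # training accuracy of the toy detector
```

The intuition is that GAN upsampling leaves characteristic artifacts in the frequency spectrum that a simple linear model can separate, which is why the DCT features carry the detection signal.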