Search Results for author: Lea Schönherr

Found 13 papers, 9 papers with code

Whispers in the Machine: Confidentiality in LLM-integrated Systems

1 code implementation10 Feb 2024 Jonathan Evertz, Merlin Chlosta, Lea Schönherr, Thorsten Eisenhofer

Specifically, malicious tools can exploit vulnerabilities in the LLM itself to manipulate the model and compromise the data of other services, raising the question of how private data can be protected in the context of LLM integrations.

A Representative Study on Human Detection of Artificially Generated Media Across Countries

1 code implementation10 Dec 2023 Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten Eisenhofer, Asja Fischer, Markus Dürmuth, Thorsten Holz

To further understand which factors influence people's ability to detect generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research.

Face Swapping Human Detection

LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games

2 code implementations29 Sep 2023 Sahar Abdelnabi, Amr Gomaa, Sarath Sivaprasad, Lea Schönherr, Mario Fritz

There is a growing interest in using Large Language Models (LLMs) as agents to tackle real-world tasks that may require assessing complex situations.

Decision Making

On the Limitations of Model Stealing with Uncertainty Quantification Models

no code implementations9 May 2023 David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr

We realize that during training, the models tend to have similar predictions, indicating that the network diversity we wanted to leverage using uncertainty quantification models is not (high) enough for improvements on the model stealing task.

Uncertainty Quantification

WaveFake: A Data Set to Facilitate Audio Deepfake Detection

2 code implementations4 Nov 2021 Joel Frank, Lea Schönherr

Deep generative modeling has the potential to cause significant harm to society.

DeepFake Detection Face Swapping

Dompteur: Taming Audio Adversarial Examples

1 code implementation10 Feb 2021 Thorsten Eisenhofer, Lea Schönherr, Joel Frank, Lars Speckemeier, Dorothea Kolossa, Thorsten Holz

In this paper we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

VenoMave: Targeted Poisoning Against Speech Recognition

1 code implementation21 Oct 2020 Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna

In a more realistic scenario, when the target audio waveform is played over the air in different rooms, VENOMAVE maintains a success rate of up to 73. 3%.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification

1 code implementation24 May 2020 Sina Däubener, Lea Schönherr, Asja Fischer, Dorothea Kolossa

The neural networks for uncertainty quantification simultaneously diminish the vulnerability to the attack, which is reflected in a lower recognition accuracy of the malicious target text in comparison to a standard hybrid ASR system.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Leveraging Frequency Analysis for Deep Fake Image Recognition

1 code implementation ICML 2020 Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz

Based on this analysis, we demonstrate how the frequency representation can be used to identify deep fake images in an automated way, surpassing state-of-the-art methods.

Image Forensics

Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems

no code implementations5 Aug 2019 Lea Schönherr, Thorsten Eisenhofer, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa

In this paper, we demonstrate the first algorithm that produces generic adversarial examples, which remain robust in an over-the-air attack that is not adapted to the specific environment.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding

no code implementations16 Aug 2018 Lea Schönherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, Dorothea Kolossa

We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i. e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception.

Cryptography and Security Sound Audio and Speech Processing

Cannot find the paper you are looking for? You can Submit a new open access paper.