Search Results for author: Leo Schwinn

Found 17 papers, 8 papers with code

Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space

no code implementations • 14 Feb 2024 • Leo Schwinn, David Dobre, Sophie Xhonneux, Gauthier Gidel, Stephan Günnemann

We address this research gap and propose the embedding space attack, which directly attacks the continuous embedding representation of input tokens.

Adversarial Robustness
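
As a rough illustration of the idea named in the snippet, here is a minimal PyTorch sketch: rather than searching over discrete tokens, a continuous perturbation on the prompt embeddings is optimized so that the frozen model assigns high likelihood to a chosen target continuation. The model interface, target choice, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of an embedding-space attack: optimize a continuous perturbation
# on the prompt embeddings instead of searching over discrete tokens.
# Assumes a causal LM that accepts `inputs_embeds` (e.g. Hugging Face).
import torch
import torch.nn.functional as F

def embedding_space_attack(model, embed_layer, prompt_ids, target_ids,
                           steps=100, lr=1e-3):
    prompt_emb = embed_layer(prompt_ids).detach()    # (1, Lp, d)
    target_emb = embed_layer(target_ids).detach()    # (1, Lt, d)
    delta = torch.zeros_like(prompt_emb, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    Lp = prompt_emb.shape[1]

    for _ in range(steps):
        inputs = torch.cat([prompt_emb + delta, target_emb], dim=1)
        logits = model(inputs_embeds=inputs).logits
        # Positions Lp-1 .. end-1 should predict the target tokens.
        pred = logits[:, Lp - 1:-1, :]
        loss = F.cross_entropy(pred.reshape(-1, pred.shape[-1]),
                               target_ids.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return delta.detach()
```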

Adversarial Attacks and Defenses in Large Language Models: Old and New Threats

1 code implementation • 30 Oct 2023 • Leo Schwinn, David Dobre, Stephan Günnemann, Gauthier Gidel

Here, one major impediment has been the overestimation of the robustness of new defense approaches due to faulty defense evaluations.

Raising the Bar for Certified Adversarial Robustness with Diffusion Models

no code implementations • 17 May 2023 • Thomas Altstidl, David Dobre, Björn Eskofier, Gauthier Gidel, Leo Schwinn

In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses.

Adversarial Robustness
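
The recipe the snippet alludes to, denoising inputs with a diffusion model before a deterministically certified classifier, could look roughly like the sketch below. `denoiser` and `certified_net` are hypothetical placeholders, not the paper's models.

```python
# Sketch: diffusion-based purification in front of a deterministically
# certified classifier (e.g. a Lipschitz-constrained network).
import torch

@torch.no_grad()
def certified_predict(x, denoiser, certified_net, noise_level=0.25):
    # Treat x as a noisy sample at a fixed noise level and run one
    # reverse (denoising) step before the certified forward pass.
    x_denoised = denoiser(x, noise_level)
    return certified_net(x_denoised).argmax(dim=-1)
```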

FastAMI -- a Monte Carlo Approach to the Adjustment for Chance in Clustering Comparison Metrics

2 code implementations • 3 May 2023 • Kai Klede, Leo Schwinn, Dario Zanca, Björn Eskofier

Clustering is at the very core of machine learning, and its applications proliferate with the increasing availability of data.

Clustering
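
The Monte Carlo idea behind adjusting for chance can be sketched as follows: estimate the expected mutual information under the permutation null model by shuffling one labeling, rather than evaluating the exact (and expensive) hypergeometric expectation. This illustrates the general approach, not FastAMI's exact algorithm.

```python
# Monte Carlo chance adjustment for mutual information.
# Labels are assumed to be non-negative integer cluster ids.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def mc_adjusted_mutual_info(labels_a, labels_b, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    mi = mutual_info_score(labels_a, labels_b)
    # Approximate E[MI] under the permutation null model by sampling.
    e_mi = np.mean([mutual_info_score(labels_a, rng.permutation(labels_b))
                    for _ in range(n_samples)])
    # Same normalization as AMI: (MI - E[MI]) / (max(H_a, H_b) - E[MI]).
    h_a, h_b = entropy(np.bincount(labels_a)), entropy(np.bincount(labels_b))
    return (mi - e_mi) / (max(h_a, h_b) - e_mi)
```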

Simulating Human Gaze with Neural Visual Attention

no code implementations • 22 Nov 2022 • Leo Schwinn, Doina Precup, Bjoern Eskofier, Dario Zanca

Existing models of human visual attention are generally unable to incorporate direct task guidance and therefore cannot model an intent or goal when exploring a scene.

Just a Matter of Scale? Reevaluating Scale Equivariance in Convolutional Neural Networks

1 code implementation • 18 Nov 2022 • Thomas Altstidl, An Nguyen, Leo Schwinn, Franz Köferl, Christopher Mutschler, Björn Eskofier, Dario Zanca

We also demonstrate that our family of models is able to generalize well towards larger scales and improve scale equivariance.
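
One way such a scale-equivariance claim can be checked empirically is sketched below: compare the feature map of a rescaled input against the rescaled feature map of the original. The model is any resolution-preserving fully convolutional feature extractor; this is an evaluation idea, not the paper's measurement protocol.

```python
# Empirical scale-equivariance error: || f(S x) - S f(x) ||.
import torch
import torch.nn.functional as F

def scale_equivariance_error(model, x, scale=2.0):
    up = lambda t: F.interpolate(t, scale_factor=scale, mode="bilinear",
                                 align_corners=False)
    f_of_scaled = model(up(x))      # f(S x)
    scaled_of_f = up(model(x))      # S f(x)
    return (f_of_scaled - scaled_of_f).abs().mean().item()
```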

Behind the Machine's Gaze: Neural Networks with Biologically-inspired Constraints Exhibit Human-like Visual Attention

no code implementations • 19 Apr 2022 • Leo Schwinn, Doina Precup, Björn Eskofier, Dario Zanca

By and large, existing computational models of visual attention tacitly assume perfect vision and full access to the stimulus and thereby deviate from foveated biological vision.
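
A minimal sketch of the kind of foveated input transform the snippet contrasts against full-stimulus access: keep the image sharp near a fixation point and increasingly blurred toward the periphery. The parameterization is an illustrative assumption.

```python
# Foveation: blend a sharp and a blurred image by distance to fixation.
import torch
import torch.nn.functional as F

def foveate(img, fixation, blur_px=9, radius=0.25):
    # img: (1, C, H, W); fixation: (y, x) in [0, 1] coordinates.
    _, _, H, W = img.shape
    ys = torch.linspace(0, 1, H).view(H, 1).expand(H, W)
    xs = torch.linspace(0, 1, W).view(1, W).expand(H, W)
    dist = ((ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2).sqrt()
    mask = (dist / radius).clamp(0, 1)      # 0 at the fovea, 1 far away
    blurred = F.avg_pool2d(img, blur_px, stride=1, padding=blur_px // 2)
    return img * (1 - mask) + blurred * mask
```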

Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks

no code implementations • 21 May 2021 • Leo Schwinn, René Raab, An Nguyen, Dario Zanca, Bjoern Eskofier

Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community.

CLIP: Cheap Lipschitz Training of Neural Networks

1 code implementation • 23 Mar 2021 • Leon Bungert, René Raab, Tim Roith, Leo Schwinn, Daniel Tenbrinck

Despite the large success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability.
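
Lipschitz-regularized training in the spirit of the title can be sketched as below: estimate the network's Lipschitz constant on input pairs and add it as a penalty to the task loss. The pair construction and weighting are illustrative assumptions, not the paper's exact CLIP procedure.

```python
# Cheap difference-quotient lower bound on the Lipschitz constant,
# used as a training penalty.
import torch
import torch.nn.functional as F

def lipschitz_penalty(model, x, eps=1e-1):
    x2 = x + eps * torch.randn_like(x)
    num = (model(x) - model(x2)).flatten(1).norm(dim=1)
    den = (x - x2).flatten(1).norm(dim=1).clamp_min(1e-12)
    return (num / den).max()

def training_loss(model, x, y, lam=0.1):
    return F.cross_entropy(model(x), y) + lam * lipschitz_penalty(model, x)
```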

Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis

1 code implementation • 24 Feb 2021 • Leo Schwinn, An Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern Eskofier

The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications.
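
A sketch of the geometric gradient analysis idea suggested by the title: compute the gradient of each class logit with respect to the input and inspect the pairwise cosine similarities of these saliency maps; atypical geometry can flag OOD or adversarial inputs. How the matrix is thresholded is an assumption, not the paper's detector.

```python
# Pairwise cosine similarities between per-class input gradients.
import torch
import torch.nn.functional as F

def gradient_cosine_matrix(model, x):
    # x: a single input with batch dimension, e.g. (1, C, H, W).
    x = x.clone().requires_grad_(True)
    logits = model(x)                       # (1, num_classes)
    grads = []
    for c in range(logits.shape[1]):
        g, = torch.autograd.grad(logits[0, c], x, retain_graph=True)
        grads.append(g.flatten())
    G = torch.stack(grads)                  # (num_classes, input_dim)
    return F.cosine_similarity(G.unsqueeze(1), G.unsqueeze(0), dim=-1)
```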

System Design for a Data-driven and Explainable Customer Sentiment Monitor

1 code implementation • 11 Jan 2021 • An Nguyen, Stefan Foerstel, Thomas Kittler, Andrey Kurzyukov, Leo Schwinn, Dario Zanca, Tobias Hipp, Da Jun Sun, Michael Schrapp, Eva Rothgang, Bjoern Eskofier

The overall framework is currently deployed; it learns and evaluates predictive models from terabytes of IoT and enterprise data to actively monitor customer sentiment for a fleet of thousands of high-end medical devices.

Interpretable Machine Learning, Management

Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks

no code implementations • 5 Nov 2020 • Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern Eskofier, Daniel Tenbrinck, Martin Burger

We empirically show that by incorporating this nonlocal gradient information, we are able to give a more accurate estimation of the global descent direction on noisy and non-convex loss surfaces.

Adversarial Attack
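
The nonlocal-gradient idea described in the snippet can be sketched as follows: average loss gradients over points sampled around the current iterate to smooth the noisy, non-convex loss surface before taking the attack step. The sampling scheme and step rule are illustrative, not the paper's exact dynamic sampling procedure.

```python
# Nonlocal gradient: average gradients in a neighborhood of x.
import torch

def nonlocal_gradient(loss_fn, x, radius=0.05, n_samples=8):
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        xs = (x + radius * torch.randn_like(x)).requires_grad_(True)
        g += torch.autograd.grad(loss_fn(xs), xs)[0]
    return g / n_samples

def attack_step(loss_fn, x, alpha=0.01):
    # Sign-based ascent step on the smoothed gradient estimate.
    return x + alpha * nonlocal_gradient(loss_fn, x).sign()
```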

Conformance Checking for a Medical Training Process Using Petri net Simulation and Sequence Alignment

1 code implementation • 21 Oct 2020 • An Nguyen, Wenyu Zhang, Leo Schwinn, Bjoern Eskofier

Process Mining has recently gained popularity in healthcare due to its potential to provide a transparent, objective and data-based view of processes.

Time Matters: Time-Aware LSTMs for Predictive Business Process Monitoring

1 code implementation • 2 Oct 2020 • An Nguyen, Srijeet Chatterjee, Sven Weinzierl, Leo Schwinn, Martin Matzner, Bjoern Eskofier

To better model the time dependencies between events, we propose a new PBPM technique based on time-aware LSTM (T-LSTM) cells.
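
A sketch of the T-LSTM cell the entry refers to: the short-term part of the cell memory is discounted by the elapsed time between events before the standard LSTM update. The decomposition layer and the decay g(dt) = 1 / log(e + dt) are common T-LSTM choices, not necessarily the paper's exact configuration.

```python
# Time-aware LSTM cell: elapsed-time decay on short-term memory.
import math
import torch
import torch.nn as nn

class TLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.decomp = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, state, dt):
        # x: (batch, input_size); dt: (batch,) elapsed times.
        h, c = state
        c_short = torch.tanh(self.decomp(c))      # short-term memory
        decay = 1.0 / torch.log(math.e + dt)      # elapsed-time decay
        c = (c - c_short) + c_short * decay.unsqueeze(-1)
        return self.cell(x, (h, c))
```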

Towards Rapid and Robust Adversarial Training with One-Step Attacks

no code implementations • 24 Feb 2020 • Leo Schwinn, René Raab, Björn Eskofier

Further, we add a learnable regularization step prior to the neural network, which we call the Pixelwise Noise Injection Layer (PNIL).
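
Based only on the one-line description above, a pixelwise noise injection layer could be parameterized roughly as follows: a learnable per-pixel noise scale applied before the network during training. The softplus parameterization is an assumption.

```python
# Learnable per-pixel noise injection, applied before the network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelwiseNoiseInjection(nn.Module):
    def __init__(self, shape):                # shape: (C, H, W)
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(shape))

    def forward(self, x):
        if self.training:
            # softplus keeps the learned noise scale non-negative.
            x = x + F.softplus(self.scale) * torch.randn_like(x)
        return x
```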
