Search Results for author: Kathrin Grosse

Found 20 papers, 2 papers with code

Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness

no code implementations • 21 Feb 2024 • David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, Patrick Seiniger, Robert Swaim, Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio Sánchez, Akos Kriston

This study explores the complexities of integrating Artificial Intelligence (AI) into Autonomous Vehicles (AVs), examining the challenges introduced by AI components and the impact on testing procedures, focusing on some of the essential requirements for trustworthy AI.

Autonomous Vehicles • Decision Making +1

Manipulating Trajectory Prediction with Backdoors

no code implementations • 21 Dec 2023 • Kaouther Messaoud, Kathrin Grosse, Mickael Chen, Matthieu Cord, Patrick Pérez, Alexandre Alahi

In this paper, we focus on backdoors - a security threat acknowledged in other fields but so far overlooked for trajectory prediction.

Autonomous Vehicles • Trajectory Prediction

Towards more Practical Threat Models in Artificial Intelligence Security

no code implementations • 16 Nov 2023 • Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Alexandre Alahi

Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI.

Machine Learning Security in Industry: A Quantitative Survey

no code implementations • 11 Jul 2022 • Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista Biggio, Katharina Krombholz

Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild.

BIG-bench Machine Learning • Decision Making

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning • Data Poisoning

Machine Learning Security against Data Poisoning: Are We There Yet?

1 code implementation • 12 Apr 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.

BIG-bench Machine Learning • Data Poisoning

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

1 code implementation • 14 Jun 2021 • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.

BIG-bench Machine Learning • Incremental Learning
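
The abstract above describes the mechanics of backdoor poisoning; as a purely illustrative aid, the sketch below shows the generic recipe: stamp a small trigger patch on a fraction of training samples and relabel them to an attacker-chosen class. The array shapes, the 3x3 corner trigger, and the poisoning rate are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of backdoor data poisoning (illustrative assumptions:
# random stand-in data, a 3x3 corner trigger, 5% poisoning rate).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: 1000 grayscale 28x28 images, 10 classes.
X = rng.random((1000, 28, 28), dtype=np.float32)
y = rng.integers(0, 10, size=1000)

def poison(X, y, target_class=0, rate=0.05):
    """Stamp a trigger patch on a fraction of samples and relabel
    them to the attacker-chosen target class."""
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X_p[idx, -3:, -3:] = 1.0   # white 3x3 square in the bottom-right corner
    y_p[idx] = target_class    # attacker-chosen label
    return X_p, y_p

X_poisoned, y_poisoned = poison(X, y)
```

A model trained on the poisoned set behaves normally on clean inputs but is nudged toward the target class whenever the trigger appears, which is the test-time behaviour the abstract describes.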

Adversarial Examples and Metrics

no code implementations • 14 Jul 2020 • Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy

In this work we study the limitations of robust classification if the target metric is uncertain.

Classification • General Classification +1

How many winning tickets are there in one DNN?

no code implementations • 12 Jun 2020 • Kathrin Grosse, Michael Backes

The recent lottery ticket hypothesis proposes that there is one sub-network that matches the accuracy of the original network when trained in isolation.
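
For context, the lottery ticket hypothesis is usually operationalised via magnitude pruning: keep the largest-magnitude weights of a trained network and rewind the surviving connections to their initial values. The sketch below is a minimal one-shot version with made-up weights and an assumed 80% pruning fraction; it illustrates the general idea only, not this paper's experimental protocol.

```python
# Minimal one-shot magnitude-pruning sketch (illustrative assumptions:
# a single 784x300 layer, random stand-in "trained" weights, 80% pruning).
import numpy as np

rng = np.random.default_rng(0)
W_init = rng.standard_normal((784, 300)).astype(np.float32)      # weights at init
W_trained = W_init + 0.1 * rng.standard_normal((784, 300)).astype(np.float32)

def winning_ticket_mask(W_trained, prune_frac=0.8):
    """Keep only the largest-magnitude trained weights; the surviving
    sub-network, rewound to initialization, is the candidate 'ticket'."""
    threshold = np.quantile(np.abs(W_trained), prune_frac)
    return (np.abs(W_trained) > threshold).astype(np.float32)

mask = winning_ticket_mask(W_trained)
W_ticket = mask * W_init   # rewind surviving weights to their initial values
print(f"kept {mask.mean():.1%} of the weights")
```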

Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks

no code implementations • 11 Jun 2020 • Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy

Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time.

Adversarial Vulnerability Bounds for Gaussian Process Classification

no code implementations • 19 Sep 2019 • Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez

To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds for the entire input domain, bounding the potential for any future adversarial method to cause such misclassification.

Classification • General Classification

On the security relevance of weights in deep learning

no code implementations • 8 Feb 2019 • Kathrin Grosse, Thomas A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow

Recently, a weight-based attack on stochastic gradient descent that induces overfitting has been proposed.

The Limitations of Model Uncertainty in Adversarial Settings

no code implementations • 6 Dec 2018 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes

Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples intended to deliberately cause misclassification.

BIG-bench Machine Learning • Gaussian Processes
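
To make the notion of adversarial examples in the abstract above concrete, the sketch below crafts one with the fast gradient sign method (FGSM), a standard technique; the tiny linear model, the random stand-in input, and the perturbation budget are placeholders for illustration, not the models analysed in the paper.

```python
# Minimal FGSM sketch (illustrative assumptions: a tiny linear model,
# a random stand-in input, and an epsilon of 0.1).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in input image
y = torch.tensor([3])                               # its true label

loss = F.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.1                                       # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()  # adversarial example
```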

On the (Statistical) Detection of Adversarial Examples

no code implementations • 21 Feb 2017 • Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel

Specifically, we augment our ML model with an additional output class, to which the model is trained to assign all adversarial inputs.

Malware Classification • Network Intrusion Detection
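
The "additional output class" idea from the abstract can be sketched roughly as follows: a K-class classifier receives a (K+1)-th output, and adversarial inputs are labelled with that extra class during training so the model learns to flag them. The model, the random stand-in data, and the way the adversarial batch is obtained here are assumptions for illustration, not the paper's exact construction.

```python
# Minimal sketch of the "extra class" detector (illustrative assumptions:
# a linear model, random stand-in clean/adversarial batches, one SGD step).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K = 10                                                  # number of clean classes
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, K + 1))

x_clean = torch.rand(64, 1, 28, 28)
y_clean = torch.randint(0, K, (64,))
x_adv = torch.rand(64, 1, 28, 28)                       # placeholder adversarial batch
y_adv = torch.full((64,), K, dtype=torch.long)          # all mapped to the extra class

x = torch.cat([x_clean, x_adv])
y = torch.cat([y_clean, y_adv])

opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt.zero_grad()
loss = F.cross_entropy(model(x), y)                     # (K+1)-way cross-entropy
loss.backward()
opt.step()
```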

Adversarial Perturbations Against Deep Neural Networks for Malware Classification

no code implementations • 14 Jun 2016 • Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel

Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs.

BIG-bench Machine Learning • Classification +3
