no code implementations • 21 Feb 2024 • David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, Patrick Seiniger, Robert Swaim, Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio Sánchez, Akos Kriston
This study explores the complexities of integrating Artificial Intelligence (AI) into Autonomous Vehicles (AVs), examining the challenges introduced by AI components and their impact on testing procedures, with a focus on some of the essential requirements for trustworthy AI.
no code implementations • 21 Dec 2023 • Kaouther Messaoud, Kathrin Grosse, Mickael Chen, Matthieu Cord, Patrick Pérez, Alexandre Alahi
In this paper, we focus on backdoors, a security threat acknowledged in other fields but so far overlooked in trajectory prediction.
no code implementations • 16 Nov 2023 • Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Alexandre Alahi
Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI.
no code implementations • 12 Dec 2022 • Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Reinforcement learning allows machines to learn from their own experience.
no code implementations • 11 Jul 2022 • Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista Biggio, Katharina Krombholz
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild.
no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.
1 code implementation • 12 Apr 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.
1 code implementation • 14 Jun 2021 • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.
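As a minimal sketch of this training-time injection (illustrative only, not the specific attack studied in the paper; the patch trigger, function names, and image layout are assumptions), consider stamping a small bright patch onto a fraction of the training images and relabeling them:

```python
import numpy as np

def poison_dataset(X, y, target_class, rate=0.05, patch_size=3, seed=0):
    """Inject backdoor poisoning samples: stamp a small bright patch (the
    trigger) onto a random fraction of training images and relabel them
    with the attacker-chosen target class. X: (n, h, w) images in [0, 1]."""
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X_p[idx, -patch_size:, -patch_size:] = 1.0   # bottom-right trigger patch
    y_p[idx] = target_class                      # attacker-chosen label
    return X_p, y_p
```

A model trained on `(X_p, y_p)` behaves normally on clean inputs but learns to map trigger-stamped inputs to `target_class`.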
no code implementations • 8 May 2021 • Lukas Bieringer, Kathrin Grosse, Michael Backes, Battista Biggio, Katharina Krombholz
Our study reveals two facets of practitioners' mental models of machine learning security.
no code implementations • 14 Jul 2020 • Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy
In this work we study the limitations of robust classification if the target metric is uncertain.
no code implementations • 12 Jun 2020 • Kathrin Grosse, Michael Backes
The recent lottery ticket hypothesis proposes that a dense network contains a sub-network that, when trained in isolation, matches the accuracy of the original network.
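A minimal sketch of the iterative magnitude pruning procedure behind the hypothesis (assuming a PyTorch model and a saved copy `init_state` of its initial weights; this is generic background, not the paper's code):

```python
import torch

def magnitude_prune_and_rewind(model, init_state, prune_frac=0.2, masks=None):
    """One round of the lottery-ticket procedure: remove the smallest-magnitude
    surviving weights, then rewind the remaining weights to their initial
    values. For simplicity this prunes every parameter tensor, not only
    weight matrices."""
    if masks is None:
        masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    with torch.no_grad():
        for name, p in model.named_parameters():
            alive = p[masks[name].bool()].abs()
            if alive.numel() == 0:
                continue
            k = max(1, int(prune_frac * alive.numel()))
            threshold = alive.kthvalue(k).values
            masks[name] *= (p.abs() > threshold).float()  # prune small weights
            p.copy_(init_state[name] * masks[name])       # rewind survivors
    return masks
```

Retraining then proceeds with the mask re-applied after each optimizer step, and the prune/rewind round is repeated until the target sparsity is reached.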
no code implementations • 11 Jun 2020 • Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy
Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time.
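To make the test-time side concrete, a hedged sketch (helper names and the patch trigger are hypothetical) of stamping the trigger and measuring how often a backdoored model outputs the attacker's class:

```python
import numpy as np

def stamp_trigger(x, patch_size=3):
    """Overlay the trigger (here assumed to be a small bright bottom-right
    patch) on a batch of images of shape (n, h, w) in [0, 1]."""
    x = x.copy()
    x[:, -patch_size:, -patch_size:] = 1.0
    return x

def attack_success_rate(predict, x, target_class):
    """Fraction of trigger-stamped inputs mapped to the attacker's class."""
    return np.mean(predict(stamp_trigger(x)) == target_class)
```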
no code implementations • 19 Sep 2019 • Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez
To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds over the entire input domain, bounding the potential of any future adversarial method to cause such misclassification.
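The AB itself is not reproduced here; as an illustration of how a guarantee can hold over the entire input domain rather than only near the training data, here is a standard global Lipschitz bound on the posterior mean of an RBF-kernel GP (the function names and the regression-style mean are assumptions, not the paper's bound):

```python
import numpy as np

def gp_mean_lipschitz_bound(alpha, lengthscale):
    """Global Lipschitz constant of an RBF-kernel GP posterior mean
    m(x) = sum_i alpha_i * exp(-||x - x_i||^2 / (2 * l^2)).
    Each kernel term's gradient norm is maximised at distance l from its
    centre, where it equals exp(-0.5) / l, so the bound holds everywhere."""
    return (np.exp(-0.5) / lengthscale) * np.abs(alpha).sum()

def certified_radius(margin, alpha, lengthscale):
    """No L2 perturbation smaller than this radius can change the sign of
    the latent mean, given its current margin |m(x)|."""
    return margin / gp_mean_lipschitz_bound(alpha, lengthscale)
```

Because the constant bounds the gradient of every kernel term everywhere, the resulting certificate needs no assumption about where the adversary perturbs.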
no code implementations • 8 Feb 2019 • Kathrin Grosse, Thomas A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow
Recently, a weight-based attack on stochastic gradient descent that induces overfitting has been proposed.
no code implementations • 6 Dec 2018 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples deliberately crafted to cause misclassification.
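The canonical fast gradient sign method (FGSM) of Goodfellow et al. illustrates such a perturbation; this PyTorch sketch is generic background, not this paper's contribution:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: a one-step L-infinity perturbation that
    moves each input in the direction that increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range
```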
no code implementations • 1 Aug 2018 • Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, Mario Fritz
In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service.
no code implementations • 6 Jun 2018 • Kathrin Grosse, Michael T. Smith, Michael Backes
For example, we are able to secure the Gaussian process classifier (GPC) against empirical membership inference through proper configuration.
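For context, the simplest form of empirical membership inference is a loss threshold (in the style of Yeom et al.); a hedged sketch, not the evaluation protocol used in the paper:

```python
import numpy as np

def membership_scores(probs, labels):
    """Per-sample negative log-likelihood under the target model; unusually
    low values (very confident predictions) suggest training-set membership."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def infer_members(probs, labels, threshold):
    """Guess 'member' for every sample whose loss falls below a threshold
    calibrated on data known to be outside the training set."""
    return membership_scores(probs, labels) < threshold
```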
no code implementations • 17 Nov 2017 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference.
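A minimal sketch of the Bayesian angle, assuming scikit-learn's GaussianProcessClassifier on a toy dataset (not the paper's setup): inputs far from the training data, as adversarial examples often are, receive near-uniform predictive probabilities, which can serve as a flag.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Fit a GP classifier; its predictive uncertainty grows away from the data.
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)

def flag_uncertain(X_test, tau=0.6):
    """Flag inputs whose top-class predictive probability falls below tau."""
    p = gpc.predict_proba(X_test)
    return p.max(axis=1) < tau
```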
no code implementations • 21 Feb 2017 • Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel
Specifically, we augment our ML model with an additional output class, to which the model is trained to assign all adversarial inputs.
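A minimal training-step sketch of this outlier-class scheme, assuming a PyTorch model whose final layer already has `n_classes + 1` outputs and an external source of adversarial examples `x_adv` (helper names are hypothetical):

```python
import torch
import torch.nn.functional as F

def adversarial_class_step(model, opt, x_clean, y_clean, x_adv, n_classes):
    """One training step: clean inputs keep their labels, while adversarial
    inputs are all assigned the extra (n_classes-th) output, so detecting
    adversarial examples becomes an ordinary classification task."""
    y_adv = torch.full((x_adv.size(0),), n_classes,
                       dtype=torch.long, device=x_adv.device)
    x = torch.cat([x_clean, x_adv])
    y = torch.cat([y_clean, y_adv])
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)  # model has n_classes + 1 outputs
    loss.backward()
    opt.step()
    return loss.item()
```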
no code implementations • 14 Jun 2016 • Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel
Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs.