no code implementations • 16 Apr 2024 • Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska
For the partially-informed agent, we propose a continual resolving approach which uses lower bounds, pre-computed offline with heuristic search value iteration (HSVI), instead of opponent counterfactual values.
no code implementations • 20 Mar 2024 • Jon Vadillo, Roberto Santana, Jose A. Lozano, Marta Kwiatkowska
The lack of transparency of Deep Neural Networks continues to be a limitation that severely undermines their reliability and usage in high-stakes applications.
no code implementations • 14 Mar 2024 • Tomáš Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtěch Forejt, Jan Křetínský, Marta Kwiatkowska, Tobias Meggendorfer, David Parker, Mateusz Ujma
The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios.
no code implementations • 28 Nov 2023 • Daqian Shao, Lukas Fesser, Marta Kwiatkowska
Robustness certification, which aims to formally certify the predictions of neural networks against adversarial inputs, has become an important tool for safety-critical applications.
no code implementations • 17 Oct 2023 • Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska
Stochastic games are a well-established model for multi-agent sequential decision making under uncertainty.
1 code implementation • 3 Oct 2023 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
Such computed lower bounds provide safety certification for the given policy and BNN model.
no code implementations • 20 Sep 2023 • Marta Kwiatkowska, Xiyue Zhang
Artificial intelligence (AI) has been advancing at a fast pace, and it is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing.
no code implementations • 30 Jun 2023 • Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska
This requires functions over continuous-state beliefs, for which we propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the continuous-state space and value vectors, and extend Bellman backups to this representation.
1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
1 code implementation • 5 May 2023 • Xiyue Zhang, Benjie Wang, Marta Kwiatkowska
Neural network verification mainly focuses on local robustness properties, which can be checked by bounding the image (set of outputs) of a given input set.
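The bounding idea described here can be illustrated with a minimal interval bound propagation sketch (the weights and input box below are illustrative toy values, not taken from the paper):

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate an input box [lo, hi] through one affine layer.

    Splitting W into positive and negative parts lets each output
    bound pick the correct end of the input interval.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

# Tiny illustrative network: one affine layer followed by ReLU.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

l1, h1 = interval_bounds(W, b, lo, hi)
l1, h1 = np.maximum(l1, 0), np.maximum(h1, 0)  # ReLU is monotone
print(l1, h1)  # a sound over-approximation of the output set
```

The resulting box over-approximates the true image of the input set; a robustness property holds if the property is satisfied by every point in the box.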
1 code implementation • 2 May 2023 • Daqian Shao, Marta Kwiatkowska
Linear Temporal Logic (LTL) is widely used to specify high-level objectives for system policies, and it is highly desirable for autonomous systems to learn the optimal policy with respect to such specifications.
1 code implementation • 17 Apr 2023 • Benjie Wang, Marta Kwiatkowska
Probabilistic circuits (PCs) are a class of tractable probabilistic models, which admit efficient inference routines depending on their structural properties.
no code implementations • 28 Nov 2022 • Artem Velikzhanin, Benjie Wang, Marta Kwiatkowska
After describing the search methodology, the selected research papers are briefly reviewed, with a view to identifying publicly available models and datasets that are well suited to analysis using the causal interventional analysis software tool developed in Wang B, Lyle C, Kwiatkowska M (2021).
1 code implementation • 31 Oct 2022 • Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska
In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.
no code implementations • 12 Oct 2022 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell
We finish by giving robust learning algorithms for halfspaces on $\{0, 1\}^n$ and then obtaining robustness guarantees for halfspaces in $\mathbb{R}^n$ against precision-bounded adversaries.
1 code implementation • 8 Oct 2022 • Aleksandar Petrov, Marta Kwiatkowska
When used in adversarial training, they improve most unsupervised robustness measures, including certified robustness.
no code implementations • 5 Jun 2022 • Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal
Solving a reinforcement learning (RL) problem poses two competing challenges: fitting a potentially discontinuous value function, and generalizing well to new observations.
no code implementations • 12 May 2022 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell
A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks.
1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
1 code implementation • 11 May 2022 • Hjalmar Wijk, Benjie Wang, Marta Kwiatkowska
In many domains, worst-case guarantees on the performance (e.g., prediction accuracy) of a decision function subject to distributional shifts and uncertainty about the environment are crucial.
no code implementations • 29 Apr 2022 • Benjie Wang, Matthew Wicker, Marta Kwiatkowska
Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data.
no code implementations • 13 Feb 2022 • Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska
Second, we introduce two novel representations for the value functions and strategies: constant-piecewise-linear (CON-PWL) and constant-piecewise-constant (CON-PWC), respectively. We then propose Minimax-action-free PI, which extends a recent PI method based on alternating player choices from finite state spaces to Borel state spaces and does not require normal-form games to be solved.
1 code implementation • 13 Dec 2021 • Emanuele La Malfa, Marta Kwiatkowska
There is growing evidence that the classical notion of adversarial robustness originally introduced for images has been adopted as a de facto standard by a large part of the NLP research community.
no code implementations • 25 Aug 2021 • Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
While this is a key concept towards safe and secure AI, we show for the first time that this approach comes with its own security risks, as such fallback strategies can be deliberately triggered by an adversary.
no code implementations • 13 Jun 2021 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti
We should ideally start from an integrated description of both the model and the steps carried out to test it, to concurrently analyze uncertainties in model parameters, equipment tolerances, and data collection.
1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.
1 code implementation • 19 May 2021 • Benjie Wang, Clare Lyle, Marta Kwiatkowska
Robustness of decision rules to shifts in the data-generating process is crucial to the successful deployment of decision-making systems.
1 code implementation • 8 May 2021 • Emanuele La Malfa, Agnieszka Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Kwiatkowska
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP).
1 code implementation • 7 Apr 2021 • Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
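The "principled computation of model uncertainty" refers to the closed-form GP posterior, whose predictive variance grows away from the training data. A minimal sketch with an RBF kernel (toy data and hyperparameters are illustrative):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Closed-form GP regression posterior: predictive mean and
    variance at the test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
Xs = np.array([[1.0], [5.0]])   # one point on the data, one far away
mean, var = gp_posterior(X, y, Xs)
# Predictive variance is small near the training data, near the
# prior variance far from it.
print(var)
```

This calibrated variance is what makes GPs attractive when a system must know when its own predictions are unreliable.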
1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, Marta Kwiatkowska
Neural network NLP models are vulnerable to small modifications of the input that maintain the original meaning but result in a different prediction.
no code implementations • 1 May 2020 • Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, Benjamin Bloem-Reddy
Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning.
1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
1 code implementation • ICML 2020 • Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, Doina Precup
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
no code implementations • 29 Nov 2019 • Kyriakos Polymenakos, Luca Laurenti, Andrea Patane, Jan-Peter Calliess, Luca Cardelli, Marta Kwiatkowska, Alessandro Abate, Stephen Roberts
Gaussian Processes (GPs) are widely employed in control and learning because of their principled treatment of uncertainty.
no code implementations • 25 Sep 2019 • Luca Laurenti, Andrea Patane, Matthew Wicker, Luca Bortolussi, Luca Cardelli, Marta Kwiatkowska
We investigate global adversarial robustness guarantees for machine learning models.
no code implementations • 21 Sep 2019 • Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska
Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.
no code implementations • NeurIPS 2019 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell
However if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution).
no code implementations • CVPR 2020 • Min Wu, Marta Kwiatkowska
The widespread adoption of deep learning models places demands on their robustness.
1 code implementation • 28 May 2019 • Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis.
1 code implementation • CVPR 2019 • Matthew Wicker, Marta Kwiatkowska
Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation.
1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.
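The measure can be estimated by Monte Carlo over posterior weight samples. A toy sketch, assuming a 1-D threshold classifier whose boundary is the sampled weight (the model, posterior, and perturbation set are illustrative stand-ins, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    # Toy 1-D classifier whose decision boundary sits at the weight w.
    return np.sign(x - w)

def prob_robustness(x, eps, w_mean=0.9, w_std=0.5, n_samples=2000):
    """Monte Carlo estimate of the probability, over the weight
    posterior, that some point in [x - eps, x + eps] receives a
    different prediction from x itself."""
    grid = np.linspace(x - eps, x + eps, 101)
    hits = 0
    for w in rng.normal(w_mean, w_std, size=n_samples):
        if np.any(predict(w, grid) != predict(w, x)):
            hits += 1
    return hits / n_samples

p = prob_robustness(x=1.0, eps=0.2)
print(p)  # fraction of posterior samples admitting an adversarial point
```

For this toy model the prediction flips exactly when the sampled boundary lands inside the perturbation interval, so the estimate converges to the posterior mass of that interval.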
no code implementations • 16 Nov 2018 • Rhiannon Michelmore, Marta Kwiatkowska, Yarin Gal
A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains.
1 code implementation • 17 Sep 2018 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane
Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems.
1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.
2 code implementations • 6 May 2018 • Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
Verifying correctness of deep neural networks (DNNs) is challenging.
2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening
Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
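On very small inputs the $L_0$ safe radius can be found by brute force: search perturbations of increasing size until the prediction flips. A toy sketch with a hypothetical majority-vote "classifier" on a binary image (exhaustive search is exponential, so this only illustrates the quantity being computed, not the paper's algorithm):

```python
import itertools
import numpy as np

def classify(img):
    # Toy classifier: predicts 1 if more than half the pixels are on.
    return int(img.sum() > img.size / 2)

def l0_safe_radius(img, max_k=None):
    """Exhaustively find the smallest number of pixel flips that
    changes the prediction; the maximal safe radius is one less."""
    flat = img.flatten()
    base = classify(img)
    n = flat.size
    max_k = max_k or n
    for k in range(1, max_k + 1):
        for idx in itertools.combinations(range(n), k):
            pert = flat.copy()
            pert[list(idx)] = 1 - pert[list(idx)]   # flip chosen pixels
            if classify(pert.reshape(img.shape)) != base:
                return k - 1   # safe radius in the L0 norm
    return max_k

img = np.array([[1, 1, 1], [1, 0, 1], [1, 0, 1]])  # 7 of 9 pixels on
r = l0_safe_radius(img)
print(r)
```

Here three pixels must be flipped before the majority vote changes, so every perturbation touching at most two pixels is safe.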
no code implementations • 21 Oct 2017 • Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.
2 code implementations • 21 Oct 2016 • Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations.