no code implementations • 30 Apr 2024 • Rayan Mazouz, John Skovbekk, Frederik Baymler Mathiesen, Eric Frew, Luca Laurenti, Morteza Lahijanian
This paper introduces a method for identifying, from data, a maximal set of safe strategies for stochastic systems with unknown dynamics using barrier certificates.
no code implementations • 22 Mar 2024 • Eduardo Figueiredo, Andrea Patane, Morteza Lahijanian, Luca Laurenti
Uncertainty propagation in non-linear dynamical systems has become a key problem in various fields including control theory and machine learning.
no code implementations • 8 Jan 2024 • Frederik Baymler Mathiesen, Morteza Lahijanian, Luca Laurenti
In this paper, we present IntervalMDP.jl, a Julia package for probabilistic analysis of interval Markov Decision Processes (IMDPs).
no code implementations • 3 Oct 2023 • Luca Laurenti, Morteza Lahijanian
Providing safety guarantees for stochastic dynamical systems has become a central problem in many fields, including control theory, machine learning, and robotics.
1 code implementation • 3 Oct 2023 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
Such computed lower bounds provide safety certification for the given policy and BNN model.
no code implementations • 19 Sep 2023 • John Skovbekk, Luca Laurenti, Eric Frew, Morteza Lahijanian
We introduce a general procedure for the finite abstraction of nonlinear stochastic systems with non-standard (e.g., non-affine, non-symmetric, non-unimodal) noise distributions for verification purposes.
no code implementations • 12 Sep 2023 • Robert Reed, Luca Laurenti, Morteza Lahijanian
Deep Kernel Learning (DKL) combines the representational power of neural networks with the uncertainty quantification of Gaussian Processes.
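The DKL idea stated above — a kernel evaluated on features produced by a neural network — admits a compact sketch. The two-layer tanh MLP, RBF base kernel, and exact GP regression below are illustrative assumptions for this listing, not the paper's implementation:

```python
import numpy as np

def mlp_features(X, W1, b1, W2, b2):
    """Feature extractor: a small two-layer MLP with tanh activations."""
    H = np.tanh(X @ W1 + b1)
    return np.tanh(H @ W2 + b2)

def rbf_kernel(A, B, lengthscale=1.0):
    """Standard RBF kernel on (already transformed) inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def dkl_posterior(X_train, y_train, X_test, params, noise=1e-2):
    """GP posterior mean/variance under a deep kernel
    k(x, x') = k_rbf(phi(x), phi(x'))."""
    phi = lambda X: mlp_features(X, *params)
    F, Fs = phi(X_train), phi(X_test)
    K = rbf_kernel(F, F) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(Fs, F)
    Kss = rbf_kernel(Fs, Fs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                     # posterior predictive mean
    V = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (V ** 2).sum(0)  # posterior predictive variance
    return mean, var
```

In a full DKL treatment the network weights would be trained jointly with the kernel hyperparameters (e.g., by maximizing the marginal likelihood); here they are simply treated as given.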
1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
1 code implementation • 19 Jun 2023 • Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs).
1 code implementation • 21 Apr 2023 • Alice Doherty, Matthew Wicker, Luca Laurenti, Andrea Patane
We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
no code implementations • 10 Apr 2023 • Frederik Baymler Mathiesen, Licio Romao, Simeon C. Calvert, Alessandro Abate, Luca Laurenti
In particular, we show that the stochastic program to synthesize a SBF can be relaxed into a chance-constrained optimisation problem on which scenario approach theory applies.
no code implementations • 29 Dec 2022 • Ibon Gracia, Dimitris Boskos, Morteza Lahijanian, Luca Laurenti, Manuel Mazo Jr
The framework we present first learns an abstraction of a switched stochastic system as a robust Markov decision process (robust MDP) by accounting for both the stochasticity of the system and the uncertainty in the noise distribution.
no code implementations • 2 Nov 2022 • Giannis Delimpaltadakis, Morteza Lahijanian, Manuel Mazo Jr., Luca Laurenti
Interval Markov Decision Processes (IMDPs) are finite-state uncertain Markov models, where the transition probabilities belong to intervals.
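The interval-transition semantics described above can be illustrated with a minimal pessimistic value iteration: for each state-action pair, an adversary picks, within the intervals, the feasible distribution that minimizes the expected value (saturating low-value successors at their upper bounds). This is a toy sketch of standard robust value iteration, not the authors' algorithm or tooling:

```python
import numpy as np

def worst_case_expectation(lo, hi, values):
    """Adversarial distribution within [lo, hi] intervals minimizing E[values]:
    sort successors by value and saturate low-value ones at their upper bound."""
    order = np.argsort(values)        # low-value successors first
    p = lo.astype(float)
    budget = 1.0 - p.sum()            # probability mass left to distribute
    for i in order:
        add = min(hi[i] - lo[i], budget)
        p[i] += add
        budget -= add
        if budget <= 0:
            break
    return p @ values

def imdp_pessimistic_value_iteration(lo, hi, reward, gamma=0.9, iters=200):
    """lo/hi: (n_states, n_actions, n_states) interval transition bounds.
    Returns a lower bound on the optimal discounted value: maximize over
    actions, minimize over feasible transition distributions."""
    n, m, _ = lo.shape
    V = np.zeros(n)
    for _ in range(iters):
        Q = np.empty((n, m))
        for s in range(n):
            for a in range(m):
                Q[s, a] = reward[s] + gamma * worst_case_expectation(
                    lo[s, a], hi[s, a], V)
        V = Q.max(axis=1)
    return V
```

On a two-state example where state 0 may reach the rewarding state 1 with probability anywhere in [0.2, 0.8], the adversary drives that probability to its lower bound, yielding the pessimistic value.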
2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.
1 code implementation • 15 Jun 2022 • Rayan Mazouz, Karan Muvvala, Akash Ratheesh, Luca Laurenti, Morteza Lahijanian
A key step in our method is the use of recent convex approximation results for NNs to find piecewise-linear bounds, which allow the formulation of the barrier function synthesis problem as a sum-of-squares optimization program.
1 code implementation • 3 Jun 2022 • Frederik Baymler Mathiesen, Simeon Calvert, Luca Laurenti
In this paper, we parameterize a barrier function as a neural network and show that techniques for robust training of neural networks can be successfully employed to find neural barrier functions.
1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
no code implementations • 11 Mar 2022 • Steven Adams, Morteza Lahijanian, Luca Laurenti
Neural networks (NNs) are emerging as powerful tools to represent the dynamics of control systems with complicated physics or black-box components.
no code implementations • 21 Feb 2022 • Giannis Delimpaltadakis, Luca Laurenti, Manuel Mazo Jr
Analyzing the sampling behaviour of Event-Triggered Control (ETC) is of paramount importance, as it enables formal assessment of its sampling performance and prediction of its sampling patterns.
no code implementations • 31 Dec 2021 • John Jackson, Luca Laurenti, Eric Frew, Morteza Lahijanian
In this article, we develop a framework for verifying partially-observable, discrete-time dynamical systems with unmodelled dynamics against temporal logic specifications from a given input-output dataset.
no code implementations • 11 Oct 2021 • John Jackson, Luca Laurenti, Eric Frew, Morteza Lahijanian
The online controller may improve on the baseline guarantees, since it avoids discretization error and reduces regression error as new data is collected.
no code implementations • 13 Jun 2021 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti
We should ideally start from an integrated description of both the model and the steps carried out to test it, to concurrently analyze uncertainties in model parameters, equipment tolerances, and data collection.
1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.
1 code implementation • 7 Apr 2021 • Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
no code implementations • 5 Apr 2021 • John Jackson, Luca Laurenti, Eric Frew, Morteza Lahijanian
We present a data-driven framework for strategy synthesis for partially-known switched stochastic systems.
no code implementations • 25 Mar 2021 • Giannis Delimpaltadakis, Luca Laurenti, Manuel Mazo Jr
Recently, there have been efforts towards understanding the sampling behaviour of event-triggered control (ETC), for obtaining metrics on its sampling performance and predicting its sampling patterns.
1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
1 code implementation • Approximate Inference AABI Symposium 2021 • Matthew Yuan, Matthew Wicker, Luca Laurenti
In particular, we consider genetic algorithms, surrogate models, as well as zeroth order optimization methods and adapt them to the goal of finding adversarial examples for BNNs.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, Marta Kwiatkowska
Neural network NLP models are vulnerable to small modifications of the input that maintain the original meaning but result in a different prediction.
1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
no code implementations • 29 Nov 2019 • Kyriakos Polymenakos, Luca Laurenti, Andrea Patane, Jan-Peter Calliess, Luca Cardelli, Marta Kwiatkowska, Alessandro Abate, Stephen Roberts
Gaussian Processes (GPs) are widely employed in control and learning because of their principled treatment of uncertainty.
no code implementations • 25 Sep 2019 • Luca Laurenti, Andrea Patane, Matthew Wicker, Luca Bortolussi, Luca Cardelli, Marta Kwiatkowska
We investigate global adversarial robustness guarantees for machine learning models.
no code implementations • 21 Sep 2019 • Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska
Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.
1 code implementation • 28 May 2019 • Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis.
1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.
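The measure defined above lends itself to a crude Monte Carlo sketch: sample weights from the posterior and, for each sample, search the bounded set around the test point for a prediction flip. The linear "BNN", Gaussian weight samples, and random probing (which under-approximates the existential check; the paper uses formal bounds instead) are all illustrative assumptions:

```python
import numpy as np

def prediction(w, x):
    """Toy binary classifier: sign of a linear score."""
    return np.sign(w @ x)

def differs_in_ball(w, x, eps, rng, n_probe=64):
    """Crude check: probe random points in the L-inf ball around x for a
    prediction flip (an under-approximation of 'exists a counterexample')."""
    base = prediction(w, x)
    probes = x + rng.uniform(-eps, eps, size=(n_probe, x.size))
    return np.any(np.sign(probes @ w) != base)

def mc_robustness(weight_samples, x, eps, rng):
    """Monte Carlo estimate over the weight posterior of
    P_w[ the prediction differs somewhere in the ball around x ]."""
    flips = [differs_in_ball(w, x, eps, rng) for w in weight_samples]
    return np.mean(flips)
```

Points far from the decision boundary of most posterior samples score near 0 under this measure, while points near the boundary score near 1.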
1 code implementation • 17 Sep 2018 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane
Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems.