no code implementations • 22 Oct 2022 • Abhinandan Pal, Francesco Ranzato, Caterina Urban, Marco Zanella
We leverage this abstraction in two ways: (1) to enhance the interpretability of SVMs by deriving a novel feature importance measure, called abstract feature importance (AFI), which does not depend in any way on a given dataset or on the accuracy of the SVM and is very fast to compute; and (2) to verify stability, notably individual fairness, of SVMs, producing concrete counterexamples when the verification fails.
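A minimal sketch of the idea behind a dataset-independent feature importance, not the authors' actual AFI construction: for a hypothetical linear SVM with weights `w` and bias `b`, one can rank features by how much each can swing the decision score `w·x + b` over an abstract input box, using only the model and the input ranges (no dataset, no accuracy information).

```python
def abstract_feature_importance(weights, feature_boxes):
    """Rank features of a linear SVM by the maximal swing each one can
    induce in the score w.x + b over an abstract input box.
    Illustrative only: weights and boxes are hypothetical inputs."""
    swings = [abs(w) * (hi - lo) for w, (lo, hi) in zip(weights, feature_boxes)]
    total = sum(swings)
    # Normalize so the importances sum to 1.
    return [s / total for s in swings]
```

Because the measure reads off only the model's weights and the feature ranges, it is computed in a single linear pass, which is consistent with the "very fast to compute" claim.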
no code implementations • 4 Jan 2021 • Francesco Ranzato, Caterina Urban, Marco Zanella
We study the problem of formally verifying individual fairness of decision tree ensembles, as well as training tree models which maximize both accuracy and individual fairness.
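To make the individual-fairness property concrete, here is a minimal sketch (not the paper's verification algorithm, which works on whole ensembles and input regions): a decision tree is individually fair on an input if changing only the sensitive attribute can never change the prediction. The tree encoding as nested dicts is a hypothetical illustration.

```python
def predict(tree, x):
    """Traverse a decision tree given as nested dicts
    {"feat": i, "thr": t, "left": ..., "right": ...}; leaves are labels."""
    while isinstance(tree, dict):
        tree = tree["left"] if x[tree["feat"]] <= tree["thr"] else tree["right"]
    return tree

def individually_fair(tree, x, sensitive_idx, values):
    """Check that every choice of the sensitive attribute yields
    the same prediction for this input (a pointwise fairness check)."""
    preds = set()
    for v in values:
        x2 = list(x)
        x2[sensitive_idx] = v
        preds.add(predict(tree, x2))
    return len(preds) == 1
```

A formal verifier must establish this for all inputs in a region rather than one point at a time, which is what makes the problem non-trivial for tree ensembles.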
no code implementations • 21 Dec 2020 • Francesco Ranzato, Marco Zanella
We put forward a novel learning methodology for ensembles of decision trees, based on a genetic algorithm that trains each decision tree to maximize both its accuracy and its robustness to adversarial perturbations.
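A toy sketch of the genetic-training idea under strong simplifying assumptions (it evolves a single 1-D decision stump threshold, not the paper's tree ensembles): the fitness rewards both accuracy and robustness, here measured as the fraction of points whose distance from the threshold exceeds a perturbation budget `eps`.

```python
import random

def fitness(thr, data, eps):
    """Combined objective: accuracy of the stump x > thr, plus the
    fraction of points that stay correctly classified under +/- eps noise."""
    acc = sum((x > thr) == y for x, y in data) / len(data)
    rob = sum(abs(x - thr) > eps for x, _ in data) / len(data)
    return acc + rob

def evolve(data, eps, gens=30, pop=20, seed=0):
    """Seeded genetic loop: keep the fitter half, refill by mutating parents."""
    rng = random.Random(seed)
    pool = [rng.uniform(0.0, 1.0) for _ in range(pop)]
    for _ in range(gens):
        pool.sort(key=lambda t: -fitness(t, data, eps))
        parents = pool[: pop // 2]
        children = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.05)))
                    for _ in range(pop - len(parents))]
        pool = parents + children
    return max(pool, key=lambda t: fitness(t, data, eps))
```

The point of the combined fitness is that a maximally accurate threshold sitting right next to a training point is fragile; the robustness term pushes the split toward the middle of the margin.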
no code implementations • 26 Apr 2019 • Francesco Ranzato, Marco Zanella
We study the problem of formally verifying the robustness to adversarial examples of support vector machines (SVMs), a major machine learning model for classification and regression tasks.
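A minimal sketch of one standard way to verify such robustness, assuming a hypothetical linear SVM (the paper's method handles general SVMs via abstract interpretation): propagate an L-infinity ball of radius `eps` around the input through the decision function `w·x + b` using interval arithmetic, and check that the score's sign cannot flip. For a linear function over a box this interval bound is exact; in general such checks are sound but may be incomplete.

```python
def is_robust(weights, bias, x, eps):
    """Return True if sign(w.x' + b) is constant for every x' with
    ||x' - x||_inf <= eps, by bounding the score with intervals."""
    lo = hi = bias
    for w, xi in zip(weights, x):
        a, b = w * (xi - eps), w * (xi + eps)
        lo += min(a, b)
        hi += max(a, b)
    # Robust iff the score interval [lo, hi] excludes zero.
    return lo > 0 or hi < 0
```

When the check fails, the interval bounds also indicate where to search for a concrete adversarial example near the decision boundary.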