Search Results for author: Marco Zanella

Found 4 papers, 0 papers with code

Abstract Interpretation-Based Feature Importance for SVMs

no code implementations · 22 Oct 2022 · Abhinandan Pal, Francesco Ranzato, Caterina Urban, Marco Zanella

We leverage this abstraction in two ways: (1) to enhance the interpretability of SVMs by deriving a novel feature importance measure, called abstract feature importance (AFI), which does not depend in any way on a given dataset or on the accuracy of the SVM and is very fast to compute; and (2) to verify stability, notably individual fairness, of SVMs and to produce concrete counterexamples when verification fails.
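As an illustration of the idea of a dataset-independent, abstraction-derived importance measure, here is a minimal hypothetical sketch for the *linear* case (the paper's AFI is defined for general SVMs): under interval arithmetic, feature i of f(x) = w·x + b contributes the interval [w_i·lo, w_i·hi] over an input box, and the width of that contribution can serve as its importance. The function name and the linear restriction are assumptions for illustration, not the paper's construction.

```python
# Hypothetical sketch: interval-based feature importance for a linear SVM
# f(x) = w.x + b. Each feature's importance is the width of its contribution
# interval under interval arithmetic, i.e. |w_i| * (hi_i - lo_i) -- it uses
# only the model and the input box, never a dataset or accuracy figures.
def abstract_feature_importance(w, bounds):
    """w: list of weights; bounds: list of (lo, hi) input intervals."""
    return [abs(wi) * (hi - lo) for wi, (lo, hi) in zip(w, bounds)]

imp = abstract_feature_importance([2.0, -0.5, 0.0], [(0, 1), (0, 4), (0, 10)])
# features 0 and 1 matter equally here; feature 2 has zero weight
```

Note that the measure is symmetric in sign: a strongly negative weight over a wide input range counts the same as a strongly positive one.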

Fairness · Feature Importance

Fair Training of Decision Tree Classifiers

no code implementations · 4 Jan 2021 · Francesco Ranzato, Caterina Urban, Marco Zanella

We study the problem of formally verifying individual fairness of decision tree ensembles, as well as training tree models which maximize both accuracy and individual fairness.
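The property being verified can be illustrated with a toy example (this is a hypothetical sketch, not the paper's verifier, which works on tree ensembles via abstraction rather than enumeration): a classifier is individually fair at an input if changing only the sensitive feature cannot change the predicted class.

```python
# Illustrative sketch: individual fairness of a decision tree means two
# inputs differing only in a sensitive feature get the same class. Here we
# brute-force the sensitive feature's finitely many values.
def tree_predict(x):
    # toy decision tree over x = (income, sensitive_group)
    if x[0] <= 50:
        return 0
    return 1  # never branches on x[1], so it is fair by construction

def individually_fair(predict, x, sensitive_idx, values):
    base = predict(x)
    for v in values:
        y = list(x)
        y[sensitive_idx] = v
        if predict(y) != base:
            return False  # y is a concrete counterexample
    return True

ok = individually_fair(tree_predict, [60, 0], sensitive_idx=1, values=[0, 1])
```

Enumerating sensitive values only works for categorical attributes; for continuous similarity relations the paper's abstract-interpretation machinery is what makes the check tractable.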

Fairness

Genetic Adversarial Training of Decision Trees

no code implementations · 21 Dec 2020 · Francesco Ranzato, Marco Zanella

We put forward a novel learning methodology for ensembles of decision trees based on a genetic algorithm which is able to train a decision tree for maximizing both its accuracy and its robustness to adversarial perturbations.
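The overall shape of such a method can be sketched as a standard genetic loop whose fitness rewards both objectives. The sketch below is generic and assumed for illustration only: the paper evolves decision trees with a fitness combining accuracy and adversarial robustness, whereas here individuals are plain numbers and the fitness is a toy function.

```python
# Generic (2-objective-in-one-fitness) genetic-algorithm skeleton, in the
# spirit of the paper. Hypothetical: real individuals would be decision
# trees and fitness would mix accuracy with robustness to perturbations.
import random

def evolve(fitness, init_pop, mutate, generations=50, seed=0):
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]                # elitist selection
        children = [mutate(p, rng) for p in survivors]  # mutation
        pop = survivors + children
    return max(pop, key=fitness)

# toy fitness: individuals closest to 10 score highest
best = evolve(lambda x: -abs(x - 10),
              init_pop=[0.0, 3.0, 7.0, 20.0],
              mutate=lambda x, rng: x + rng.uniform(-1, 1))
```

Because selection is elitist, the best individual never degrades across generations, which is what lets a single scalar fitness trade off the two objectives stably.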

Robustness Verification of Support Vector Machines

no code implementations · 26 Apr 2019 · Francesco Ranzato, Marco Zanella

We study the problem of formally verifying the robustness to adversarial examples of support vector machines (SVMs), a major machine learning model for classification and regression tasks.
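A minimal sketch of what such a verification amounts to, assuming the simplest possible setting (a *linear* SVM and an l-infinity perturbation ball; the paper covers general SVMs via abstract interpretation): the classifier is provably robust at x if the sign of the decision function cannot flip anywhere in the ball.

```python
# Hypothetical interval-based robustness check for a linear SVM
# f(x) = w.x + b: sound and exact in the linear case, since the extremes of
# f over the box [x - eps, x + eps] are attained coordinate-wise.
def is_robust_linear_svm(w, b, x, eps):
    lo = b + sum(wi * xi - abs(wi) * eps for wi, xi in zip(w, x))
    hi = b + sum(wi * xi + abs(wi) * eps for wi, xi in zip(w, x))
    return lo > 0 or hi < 0  # the sign of f is constant on the whole ball

robust = is_robust_linear_svm([1.0, -2.0], b=0.5, x=[3.0, 1.0], eps=0.4)
```

With nonlinear kernels the extremes are no longer attained coordinate-wise, which is where a genuine abstract domain (rather than plain intervals) earns its keep.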
