1 code implementation • 1 Jun 2022 • Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Alberto Veneri
Interpretable Learning to Rank (LtR) is an emerging field within the research area of explainable AI, aiming at developing intelligible and accurate predictive models.
no code implementations • 29 Dec 2021 • Seyum Assefa Abebe, Claudio Lucchese, Salvatore Orlando
Nowadays Machine Learning (ML) techniques are extensively adopted in many socially sensitive systems, thus requiring a careful study of the fairness of the decisions taken by such systems.
no code implementations • 5 Dec 2021 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando
In this paper we criticize the robustness measure traditionally employed to assess the performance of machine learning models deployed in adversarial settings.
1 code implementation • 6 May 2021 • Francesco Busolin, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani
Modern search engine ranking pipelines are commonly based on large machine-learned ensembles of regression trees.
no code implementations • 30 Apr 2020 • Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Salvatore Trani
In this paper, we investigate the novel problem of "query-level early exiting": deciding whether it is profitable to stop the traversal of the ranking ensemble early for all the candidate documents to be scored for a query, returning a ranking based on the additive scores computed by only a limited portion of the ensemble.
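The idea can be sketched on a toy additive ensemble. In this hypothetical illustration, each tree's contribution is mocked as a precomputed number, and the exit test (a score-gap margin at the top of the partial ranking) is an illustrative heuristic, not the criterion proposed in the paper.

```python
# Hypothetical sketch of query-level early exiting over an additive
# tree ensemble. tree_scores[d][t] is the (mocked) contribution of
# tree t to document d's score.

def partial_scores(tree_scores, n_trees):
    """Additive scores using only the first n_trees of the ensemble."""
    return [sum(doc[:n_trees]) for doc in tree_scores]

def rank_with_early_exit(tree_scores, checkpoint, margin):
    """Score with a prefix of the ensemble; exit early for the whole
    query if the top document already leads by a comfortable margin."""
    total = len(tree_scores[0])
    scores = partial_scores(tree_scores, checkpoint)
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    if scores[ranked[0]] - scores[ranked[1]] >= margin:
        return ranked, checkpoint  # early exit: partial ranking returned
    # Otherwise fall back to traversing the full ensemble.
    scores = partial_scores(tree_scores, total)
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return ranked, total

docs = [[0.9, 0.1, 0.0, 0.0],
        [0.2, 0.1, 0.1, 0.1],
        [0.1, 0.0, 0.2, 0.1]]
ranking, trees_used = rank_with_early_exit(docs, checkpoint=2, margin=0.5)
# Here the gap after 2 trees (1.0 vs 0.3) exceeds the margin, so only
# half the ensemble is traversed for this query.
```

The per-query decision is what distinguishes this from document-level early exiting: either all candidates for the query stop at the checkpoint, or none do.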
no code implementations • 7 Apr 2020 • Stefano Calzavara, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando
The attacker aims at finding a minimal perturbation of a test instance that changes the model outcome.
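The attacker's objective can be made concrete on a toy model. The sketch below uses a single decision stump (a threshold test on one feature), where the minimal flipping perturbation has a closed form; the model, function names, and epsilon are illustrative assumptions, since attacking real tree ensembles requires search or optimization, but the objective is the same.

```python
# Illustrative sketch: find a minimal perturbation of a test instance
# that changes a decision stump's outcome. The stump and all names
# here are hypothetical, chosen only to make the attack goal concrete.

def stump_predict(x, feature=0, threshold=0.5):
    """Toy model: predict 1 iff the chosen feature is <= threshold."""
    return 1 if x[feature] <= threshold else 0

def minimal_flip(x, feature=0, threshold=0.5):
    """Smallest change to x[feature] that flips the stump's output."""
    eps = 1e-6  # step just past the threshold
    original = stump_predict(x, feature, threshold)
    # Move the feature to the nearest point on the other side.
    target = threshold + eps if original == 1 else threshold
    x_adv = list(x)
    x_adv[feature] = target
    return x_adv, abs(target - x[feature])

x = [0.3, 0.8]                # predicted 1 by the stump
x_adv, delta = minimal_flip(x)  # perturbation of ~0.2 on feature 0
```

For tree ensembles the decision surface is piecewise constant, so such attacks amount to searching the leaves reachable within a perturbation budget rather than following a gradient.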
1 code implementation • 2 Jul 2019 • Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando
Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.