no code implementations • 25 Mar 2024 • Fernando Acero, Parisa Zehtabi, Nicolas Marchesotti, Michael Cashmore, Daniele Magazzeni, Manuela Veloso
Portfolio optimization involves determining the optimal allocation of portfolio assets in order to maximize a given investment objective.
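As a generic illustration of this problem setting (not the method proposed in the paper), a textbook unconstrained mean-variance allocation can be sketched in a few lines; `risk_aversion` and the toy return data are assumptions for the example:

```python
import numpy as np

def mean_variance_weights(returns, risk_aversion=1.0):
    """Closed-form unconstrained mean-variance weights, rescaled to sum to 1.

    Maximises  w^T mu - (risk_aversion / 2) * w^T Sigma w  and then
    normalises the solution so the allocation is fully invested.
    """
    mu = returns.mean(axis=0)              # expected asset returns
    sigma = np.cov(returns, rowvar=False)  # return covariance matrix
    raw = np.linalg.solve(risk_aversion * sigma, mu)
    return raw / raw.sum()

# Toy data: 500 simulated return observations for 3 assets.
rng = np.random.default_rng(0)
sample = rng.normal(loc=[0.05, 0.03, 0.04], scale=[0.1, 0.05, 0.08], size=(500, 3))
w = mean_variance_weights(sample)
print(w.sum())  # fully invested: weights sum to 1 (up to floating point)
```

Real portfolio problems add constraints (long-only, turnover, cardinality), which is where the optimization machinery studied in work like this becomes necessary.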
no code implementations • 13 Mar 2024 • Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso
In this paper, we introduce the problem of feature \emph{reselection}, whereby features can be efficiently selected with respect to secondary model performance characteristics even after a feature selection process has been completed with respect to a primary objective.
no code implementations • 23 Nov 2023 • Sikha Pentyala, Shubham Sharma, Sanjay Kariyappa, Freddy Lecue, Daniele Magazzeni
We observe that PrivRecourse can provide paths that are private and realistic.
no code implementations • 9 Nov 2023 • Zikai Xiong, Niccolò Dalmasso, Shubham Sharma, Freddy Lecue, Daniele Magazzeni, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
In this work, we present fair Wasserstein coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.
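To make the "representative samples with sample-level weights" idea concrete, here is a toy weighted coreset built with plain Lloyd's k-means. This is explicitly not the FWC algorithm, which additionally enforces fairness constraints and works with Wasserstein distances; it only illustrates the weighted-summary output format:

```python
import numpy as np

def weighted_coreset(X, k, iters=10, seed=0):
    """Toy weighted coreset via Lloyd's k-means: each centroid stands in
    for its cluster, weighted by the number of points it represents."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    weights = np.bincount(labels, minlength=k)
    return centers, weights

# Two well-separated toy clusters of 60 and 40 points.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.1, (60, 2)), rng.normal(3, 0.1, (40, 2))])
centers, weights = weighted_coreset(X, k=2)
print(weights.sum())  # total weight equals the dataset size
```

A downstream learner would then train on `(centers, weights)` instead of the full dataset.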
no code implementations • 17 Jul 2023 • Kyle Mana, Fernando Acero, Stephen Mak, Parisa Zehtabi, Michael Cashmore, Daniele Magazzeni, Manuela Veloso
Many discrete optimization problems are $\mathcal{NP}$-hard, spanning fields such as mixed-integer programming and combinatorial optimization.
no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanation are feature attributions and counterfactual explanations.
no code implementations • 10 Jul 2023 • Sanjay Kariyappa, Leonidas Tsepenekas, Freddy Lécué, Daniele Magazzeni
While any method to compute SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample inefficient.
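The baseline that TkIP improves on can be sketched as a plain Monte Carlo (permutation) Shapley estimator that reports a standard error per feature; the top-k set would then be read off once the confidence intervals separate. This is an illustration of SamplingSHAP-style estimation, not the TkIP algorithm itself:

```python
import numpy as np

def sampling_shap(f, x, baseline, n_perms=200, seed=0):
    """Monte Carlo (permutation) estimate of Shapley values with standard
    errors. For each sampled permutation, features are switched from
    `baseline` to `x` one at a time; the marginal change in f is one sample
    of that feature's Shapley value."""
    rng = np.random.default_rng(seed)
    d = len(x)
    samples = np.zeros((n_perms, d))
    for t in range(n_perms):
        perm = rng.permutation(d)
        z = baseline.copy()
        prev = f(z)
        for i in perm:
            z[i] = x[i]
            cur = f(z)
            samples[t, i] = cur - prev
            prev = cur
    return samples.mean(axis=0), samples.std(axis=0, ddof=1) / np.sqrt(n_perms)

# Toy linear model: the exact Shapley values are w_i * (x_i - baseline_i).
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(w @ z)
phi, se = sampling_shap(f, x=np.array([1.0, 1.0, 1.0]), baseline=np.zeros(3))
print(phi)  # ≈ [2.0, -1.0, 0.5]
```

The sample inefficiency the paper targets is visible here: every feature gets the same budget of permutations even when only the top-k ranking is needed.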
1 code implementation • 26 May 2023 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods prominent in fairness, recourse and model understanding.
1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
There is an emerging interest in generating robust counterfactual explanations that would remain valid if the model is updated or changed even slightly.
no code implementations • 21 Jan 2023 • Natraj Raman, Daniele Magazzeni, Sameena Shah
Counterfactual explanations utilize feature perturbations to analyze the outcome of an original decision and recommend an actionable recourse.
no code implementations • 21 Nov 2022 • Joshua Lockhart, Daniele Magazzeni, Manuela Veloso
The Concept Bottleneck Models (CBMs) of Koh et al. [2020] provide a means to ensure that a neural-network-based classifier bases its predictions solely on human-understandable concepts.
no code implementations • 11 Nov 2022 • Danial Dervovic, Nicolas Marchesotti, Freddy Lecue, Daniele Magazzeni
We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs) which replace the ubiquitous logistic link function in General Additive Models (GAMs); and SubscaleHedge, an expert advice algorithm for combining base models trained on subsets of features called subscales.
no code implementations • 7 Nov 2022 • Joshua Lockhart, Nicolas Marchesotti, Daniele Magazzeni, Manuela Veloso
Concept bottleneck models perform classification by first predicting which of a list of human-provided concepts are true about a datapoint.
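The two-stage structure described above can be sketched with a minimal forward pass; the weights here are hypothetical and untrained, chosen only to show that the label head sees the concept predictions and nothing else:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneck:
    """Minimal concept bottleneck model: the classifier only sees the
    predicted concept probabilities, never the raw input features."""

    def __init__(self, w_concepts, w_label):
        self.w_concepts = w_concepts  # (n_features, n_concepts)
        self.w_label = w_label        # (n_concepts,)

    def predict_concepts(self, x):
        return sigmoid(x @ self.w_concepts)

    def predict(self, x):
        c = self.predict_concepts(x)      # human-inspectable bottleneck
        return sigmoid(c @ self.w_label)  # label depends on concepts only

# Hypothetical weights: concept 0 fires on feature 0, concept 1 on feature 1;
# the label is driven up by concept 0 and down by concept 1.
model = ConceptBottleneck(
    w_concepts=np.array([[4.0, 0.0], [0.0, 4.0]]),
    w_label=np.array([3.0, -3.0]),
)
x = np.array([1.0, -1.0])
concepts = model.predict_concepts(x)
print(concepts.round(2), model.predict(x))
```

Because the intermediate `concepts` vector is exposed, a human can inspect (or even override) it before the final prediction is made.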
no code implementations • 5 Oct 2022 • Mattia Villani, Joshua Lockhart, Daniele Magazzeni
Feature importance techniques have enjoyed widespread attention in the explainable AI literature as a means of determining how trained machine learning models make their predictions.
no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni
In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.
no code implementations • 14 Apr 2022 • Dan Ley, Saumitra Mishra, Daniele Magazzeni
Counterfactual explanations have been widely studied in explainability, with a range of application dependent methods emerging in fairness, recourse and model understanding.
no code implementations • 23 Mar 2022 • Tiffany Tuor, Joshua Lockhart, Daniele Magazzeni
Our proposed approach enhances conventional federated learning techniques to make them suitable for asynchronous training in this intra-organisation, cross-silo setting.
no code implementations • 16 Mar 2022 • Alberto Pozanco, Francesca Mosca, Parisa Zehtabi, Daniele Magazzeni, Sarit Kraus
The EXPRES framework consists of: (i) an explanation generator that, based on a Mixed-Integer Linear Programming model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human interpretable ones.
no code implementations • 14 Mar 2022 • Marc Rigter, Danial Dervovic, Parisa Hassanzadeh, Jason Long, Parisa Zehtabi, Daniele Magazzeni
To improve the scalability of our approach to a greater number of task classes, we present an approximation based on state abstraction.
no code implementations • 30 Oct 2021 • Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
There exist several methods that aim to address the crucial task of understanding the behaviour of AI/ML models.
2 code implementations • 27 Oct 2021 • Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score for each input feature to a model.
1 code implementation • 10 Oct 2021 • Yufei Wu, Mahmoud Mahfouz, Daniele Magazzeni, Manuela Veloso
The success of machine learning models in the financial domain is highly reliant on the quality of the data representation.
no code implementations • 10 Oct 2021 • Yufei Wu, Mahmoud Mahfouz, Daniele Magazzeni, Manuela Veloso
The success of deep learning-based limit order book forecasting models is highly dependent on the quality and the robustness of the input data representation.
no code implementations • EMNLP (FEVER) 2021 • Neema Kotonya, Thomas Spooner, Daniele Magazzeni, Francesca Toni
This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset.
no code implementations • 29 Jun 2021 • Thomas Spooner, Danial Dervovic, Jason Long, Jon Shepard, Jiahao Chen, Daniele Magazzeni
We present a new method for counterfactual explanations (CFEs) based on Bayesian optimisation that applies to both classification and regression models.
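The underlying objective can be sketched without the Bayesian optimisation machinery: find the perturbation closest to the query point whose prediction crosses the decision threshold. This sketch uses plain random search over that objective, purely as an illustration of what a CFE method optimises; the toy classifier and search radius are assumptions:

```python
import numpy as np

def counterfactual_search(f, x, target=0.5, n_samples=5000, radius=2.0, seed=0):
    """Find the closest sampled perturbation of x whose prediction reaches
    `target`: minimise distance to x subject to f(x') >= target."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        xp = x + rng.uniform(-radius, radius, size=len(x))
        if f(xp) >= target:
            dist = np.linalg.norm(xp - x)
            if dist < best_dist:
                best, best_dist = xp, dist
    return best

# Toy classifier: sigmoid of a linear score; x is currently rejected.
f = lambda z: 1.0 / (1.0 + np.exp(-(z[0] + z[1] - 2.0)))
x = np.array([0.0, 0.0])
cf = counterfactual_search(f, x)
print(cf, f(cf))  # a nearby point with f(cf) >= 0.5
```

Bayesian optimisation replaces the blind sampling loop with a surrogate model that proposes promising perturbations, which is what makes the approach practical for expensive models.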
no code implementations • 29 Mar 2021 • Benjamin Krarup, Senka Krivic, Daniele Magazzeni, Derek Long, Michael Cashmore, David E. Smith
We formally define model-based compilations in PDDL2.1 of each constraint derived from a user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity.
no code implementations • 17 Nov 2019 • Michael Cashmore, Alessandro Cimatti, Daniele Magazzeni, Andrea Micheli, Parisa Zehtabi
One of the major limitations on the employment of model-based planning and scheduling in practical applications is the need for costly re-planning when an incongruence between observed reality and the formal model is encountered during execution.
no code implementations • 14 Aug 2019 • Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, Daniele Magazzeni, David Smith
Explainable AI is an important area of research within which Explainable Planning is an emerging topic.
no code implementations • 15 Oct 2018 • Rita Borgo, Michael Cashmore, Daniele Magazzeni
In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why.
no code implementations • 11 Jul 2018 • Luca Viganò, Daniele Magazzeni
The Defense Advanced Research Projects Agency (DARPA) recently launched the Explainable Artificial Intelligence (XAI) program that aims to create a suite of new AI techniques that enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.
no code implementations • 29 Sep 2017 • Maria Fox, Derek Long, Daniele Magazzeni
As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent.
no code implementations • 12 Apr 2017 • Marcello Balduccini, Daniele Magazzeni, Marco Maratea, Emily LeBlanc
CASP is an extension of ASP that allows for numerical constraints to be added in the rules.
no code implementations • 31 Aug 2016 • Marcello Balduccini, Daniele Magazzeni, Marco Maratea
PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics.
no code implementations • 23 Jan 2014 • Maria Fox, Derek Long, Daniele Magazzeni
Application of the approach leads to construction of policies that, in simulation, significantly outperform those that are currently in use and the best published solutions to the battery management problem.