no code implementations • 5 May 2024 • Peter Anthony, Francesco Giannini, Michelangelo Diligenti, Martin Homola, Marco Gori, Stefan Balogh, Jan Mojzis
Moreover, we introduce a tailored version of LENs that is shown to generate logic explanations with higher fidelity to the model's predictions.
1 code implementation • 2 Feb 2024 • Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Giuseppe Marra, Marc Langheinrich
…and imagine alternative scenarios that could result in different predictions (the "What if?").
no code implementations • 23 Aug 2023 • Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra
The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept-Based Models (CBMs), are not designed to solve relational problems, while relational models are not as interpretable as CBMs.
1 code implementation • 27 Apr 2023 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio', Frederic Precioso, Mateja Jamnik, Giuseppe Marra
Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust.
no code implementations • 27 Apr 2023 • Pietro Barbiero, Stefano Fioravanti, Francesco Giannini, Alberto Tonda, Pietro Lio, Elena Di Lavore
Explainable AI (XAI) aims to address the human need for safe and reliable AI systems.
no code implementations • 23 Mar 2023 • Michelangelo Diligenti, Francesco Giannini, Stefano Fioravanti, Caterina Graziani, Moreno Falaschi, Giuseppe Marra
In this paper, we exploit logic rules to enhance the embedding representations of KGEs on the PharmKG dataset.
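A common way to inject a logic rule such as the implication r1(h, t) ⇒ r2(h, t) into embedding training is to add a soft penalty whenever the antecedent triple scores as more plausible than the consequent. A minimal NumPy sketch, assuming a TransE-style scoring function; the rule and scoring choice are illustrative, not the paper's exact formulation:

```python
import numpy as np

def transe_score(h, r, t):
    # TransE-style plausibility: higher (less negative) is more plausible.
    return -np.linalg.norm(h + r - t)

def implication_penalty(h, t, r_ante, r_cons):
    # Soft constraint for r_ante(h, t) => r_cons(h, t): penalize triples
    # where the antecedent is more plausible than the consequent.
    return max(0.0, transe_score(h, r_ante, t) - transe_score(h, r_cons, t))

# Illustrative random embeddings standing in for trained entity/relation vectors.
rng = np.random.default_rng(0)
h, t = rng.normal(size=8), rng.normal(size=8)
r_ante, r_cons = rng.normal(size=8), rng.normal(size=8)
print(f"rule-violation penalty: {implication_penalty(h, t, r_ante, r_cons):.3f}")
```

In training, such a penalty would be added to the usual link-prediction loss, nudging embeddings toward configurations consistent with the rule.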
1 code implementation • 4 Nov 2022 • Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Lio
Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions.
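The core idea, reading a propositional explanation off a model's concept-level behavior, can be sketched by enumerating the concept assignments that activate a prediction and joining them into a DNF formula. This is a toy illustration of the kind of output involved, not the LENs algorithm itself:

```python
from itertools import product

def dnf_explanation(predict, concept_names):
    # Enumerate all concept truth assignments; collect the minterms where
    # the model predicts the positive class, and join them into a DNF formula.
    terms = []
    for bits in product([0, 1], repeat=len(concept_names)):
        if predict(bits):
            lits = [n if b else f"~{n}" for n, b in zip(concept_names, bits)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms)

# Toy model: predict "bird" iff it has wings and a beak.
toy = lambda c: c[0] and c[1]
print(dnf_explanation(toy, ["wings", "beak"]))  # → (wings & beak)
```

Exhaustive enumeration only scales to a handful of concepts; LENs instead learn sparse logic-friendly representations from which compact explanations are extracted.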
1 code implementation • 19 Sep 2022 • Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy.
1 code implementation • 11 Aug 2021 • Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, Stefano Melacci
The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience.
3 code implementations • 12 Jun 2021 • Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, Stefano Melacci
Explainable artificial intelligence has rapidly emerged since lawmakers have started requiring interpretable models for safety-critical domains.
Ranked #1 on Image Classification on CUB
no code implementations • 1 Jun 2021 • Giuseppe Marra, Michelangelo Diligenti, Francesco Giannini
However, they struggle both to deal with the intrinsic uncertainty of the observations and to scale to real-world applications.
no code implementations • 6 Feb 2020 • Giuseppe Marra, Michelangelo Diligenti, Francesco Giannini, Marco Gori, Marco Maggini
Deep learning has been shown to achieve impressive results in several tasks where a large amount of training data is available.
no code implementations • 31 Aug 2019 • Francesco Giannini, Marco Maggini
A key property of support vector machines is that only a small portion of the training data, the so-called support vectors, determines the maximum-margin separating hyperplane in the feature space.
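This sparsity can be observed directly with scikit-learn; the dataset and parameters below are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs, 100 points each.
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fit a linear-kernel SVM; only points on or inside the margin
# end up with nonzero dual coefficients, i.e. as support vectors.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(len(clf.support_vectors_), "of", len(X), "points are support vectors")
```

On separable data like this, the hyperplane is pinned down by only a few boundary points, which is what makes incremental and decremental SVM training feasible.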
no code implementations • 26 Jul 2019 • Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Marco Maggini, Marco Gori
Neural-symbolic approaches have recently gained popularity to inject prior knowledge into a learner without requiring it to induce this knowledge from data.
no code implementations • 18 Jul 2019 • Francesco Giannini, Giuseppe Marra, Michelangelo Diligenti, Marco Maggini, Marco Gori
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing.
no code implementations • 18 Mar 2019 • Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Marco Gori
Despite the impressive results obtained by deep learning in many applications, truly intelligent behavior of an agent acting in a complex environment is likely to require some form of higher-level symbolic inference.
no code implementations • 14 Jan 2019 • Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Marco Gori
Deep learning is very effective at jointly learning feature representations and classification models, especially when dealing with high dimensional input patterns.
no code implementations • 16 Jul 2018 • Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Marco Gori
We use deep architectures to model the involved variables, and propose a computational scheme in which the learning process enforces satisfaction of the constraints.
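A standard way to realize such a scheme is to translate each logic constraint into a differentiable penalty via a t-norm fuzzy relaxation and add it to the supervised loss. A minimal sketch under the product t-norm; the rule, the truth degrees, and the loss weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def product_implies(a, b):
    # Product-t-norm-based relaxation of the implication a => b,
    # with truth degrees a, b in [0, 1]: 1 - a + a*b.
    return 1.0 - a + a * b

def constraint_loss(p_bird, p_flies):
    # Penalize violations of the (illustrative) rule: bird(x) => flies(x).
    truth = product_implies(p_bird, p_flies)
    return np.mean(1.0 - truth)

# Predicted truth degrees for a small batch of examples.
p_bird = np.array([0.9, 0.1, 0.8])
p_flies = np.array([0.95, 0.5, 0.2])
print(f"constraint violation: {constraint_loss(p_bird, p_flies):.3f}")
```

Because the penalty is differentiable in the predicted degrees, it can be minimized jointly with any supervised loss by gradient descent, which is what lets the learner "carry out" constraint satisfaction during training.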
1 code implementation • 10 Mar 2017 • Francesco Giannini, Vincenzo Laveglia, Alessandro Rossi, Dario Zanca, Andrea Zugarini
This report provides an introduction to some Machine Learning tools within the most common development environments.