no code implementations • 29 Apr 2023 • Mario Alviano, Francesco Bartoli, Marco Botta, Roberto Esposito, Laura Giordano, Daniele Theseider Dupré
In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a multilayer neural network model.
no code implementations • 31 Mar 2023 • Bruno Casella, Roberto Esposito, Carlo Cavazzoni, Marco Aldinucci
Data carry a value that might vanish when shared with others; the ability to avoid sharing data enables industrial applications where security and privacy are of paramount importance. It makes it possible to train global models by implementing only local policies, which can be run independently and even in air-gapped data centres.
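Training a global model while only local policies run on each site is the core idea behind federated aggregation. As a generic illustration (an assumption about the setting, not necessarily the paper's protocol), a FedAvg-style weighted average of locally trained parameters can be sketched as:

```python
import numpy as np

def federated_average(local_weights, sizes):
    """FedAvg-style aggregation: weighted mean of local model parameters,
    with weights proportional to each site's dataset size.
    Only parameters are exchanged; no raw data leaves a site."""
    total = sum(sizes)
    return sum((n / total) * w for w, n in zip(local_weights, sizes))

# Hypothetical parameters trained independently at two sites
w_a = np.array([1.0, 3.0])   # site A, 100 local samples
w_b = np.array([3.0, 5.0])   # site B, 300 local samples
global_w = federated_average([w_a, w_b], sizes=[100, 300])
```

Here `global_w` weights site B's parameters three times more heavily than site A's, reflecting its larger local dataset.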
1 code implementation • 19 Mar 2023 • Bruno Casella, Roberto Esposito, Antonio Sciarappa, Carlo Cavazzoni, Marco Aldinucci
Training Deep Learning (DL) models requires large, high-quality datasets, often assembled from data held by different institutions.
1 code implementation • 15 Feb 2023 • Gianluca Mittone, Nicolò Tonci, Robert Birke, Iacopo Colonnelli, Doriana Medić, Andrea Bartolini, Roberto Esposito, Emanuele Parisi, Francesco Beneventi, Mirko Polato, Massimo Torquati, Luca Benini, Marco Aldinucci
Federated Learning (FL) and Edge Inference are examples of Distributed Machine Learning (DML).
no code implementations • 4 Aug 2022 • Mattia Cerrato, Marius Köppel, Roberto Esposito, Stefan Kramer
In this paper, we propose a methodology for direct computation of the mutual information between a neural layer and a sensitive attribute.
no code implementations • 7 Feb 2022 • Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel, Alexander Segner, Roberto Esposito, Stefan Kramer
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information.
2 code implementations • 29 Jun 2020 • Roberto Esposito, Mattia Cerrato, Marco Locatelli
In this paper we propose a variant of the linear least squares model allowing practitioners to partition the input features into groups of variables that they require to contribute similarly to the final result.
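One simplified way to read "features in a group contribute similarly" is to tie the coefficients within each group, which reduces to ordinary least squares on per-group feature sums. This sketch illustrates that simplified idea only, not the paper's actual model:

```python
import numpy as np

def grouped_least_squares(X, y, groups):
    """Least squares where all features in a group share one coefficient:
    equivalent to ordinary LS on the per-group column sums.
    `groups[j]` is the group id of column j. A toy simplification."""
    ids = sorted(set(groups))
    Xg = np.stack(
        [X[:, [j for j, g in enumerate(groups) if g == gid]].sum(axis=1)
         for gid in ids],
        axis=1,
    )
    beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    return dict(zip(ids, beta))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
# columns 0-1 jointly contribute +2, columns 2-3 jointly contribute -1
y = 2.0 * (X[:, 0] + X[:, 1]) - 1.0 * (X[:, 2] + X[:, 3])
coef = grouped_least_squares(X, y, groups=[0, 0, 1, 1])
```

With noiseless data generated exactly from the group sums, the fit recovers the group coefficients +2 and -1.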