Search Results for author: Roberto Esposito

Found 7 papers, 3 papers with code

A preferential interpretation of MultiLayer Perceptrons in a conditional logic with typicality

no code implementations · 29 Apr 2023 · Mario Alviano, Francesco Bartoli, Marco Botta, Roberto Esposito, Laura Giordano, Daniele Theseider Dupré

In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a multilayer neural network model.

Benchmarking FedAvg and FedCurv for Image Classification Tasks

no code implementations · 31 Mar 2023 · Bruno Casella, Roberto Esposito, Carlo Cavazzoni, Marco Aldinucci

Data carry a value that can vanish once shared with others; avoiding data sharing enables industrial applications where security and privacy are of paramount importance, making it possible to train global models by implementing only local policies, which can be run independently and even on air-gapped data centres.

Tasks: Benchmarking, Classification, +2
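The global-model training described above rests on FedAvg-style aggregation: clients train locally and only model parameters are shared and averaged. A minimal sketch of that aggregation step (the `fedavg` helper and the toy two-client data are illustrative, not the paper's code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters as a sample-size-weighted average.

    client_weights: one list of numpy arrays (layer parameters) per client.
    client_sizes: number of local training samples per client, used as weights.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two toy clients, each holding a single 2x2 "layer" of parameters.
a = [np.array([[1.0, 1.0], [1.0, 1.0]])]
b = [np.array([[3.0, 3.0], [3.0, 3.0]])]
avg = fedavg([a, b], [1, 3])  # client b holds 3x the data, so it dominates
# avg[0] is 0.25 * 1 + 0.75 * 3 = 2.5 in every entry
```

No raw data leaves either client; only the parameter arrays are exchanged, which is what makes the air-gapped deployment scenario possible.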

Experimenting with Normalization Layers in Federated Learning on non-IID scenarios

1 code implementation · 19 Mar 2023 · Bruno Casella, Roberto Esposito, Antonio Sciarappa, Carlo Cavazzoni, Marco Aldinucci

Training Deep Learning (DL) models requires large, high-quality datasets, often assembled with data from different institutions.

Tasks: Federated Learning, Privacy Preserving

Invariant Representations with Stochastically Quantized Neural Networks

no code implementations · 4 Aug 2022 · Mattia Cerrato, Marius Köppel, Roberto Esposito, Stefan Kramer

In this paper, we propose a methodology for direct computation of the mutual information between a neural layer and a sensitive attribute.

Tasks: Attribute, Representation Learning
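When a neural layer is stochastically quantized, its activations take finitely many values, so the mutual information with a discrete sensitive attribute can be estimated directly from counts rather than bounded or approximated. A small sketch of that empirical computation (the function name and toy data are illustrative, not the paper's method):

```python
import numpy as np
from collections import Counter

def discrete_mutual_information(z, s):
    """Empirical mutual information I(Z; S) in nats between two discrete samples.

    z: sequence of hashable quantized layer codes (one per example).
    s: sequence of sensitive-attribute values, same length as z.
    """
    n = len(z)
    pz, ps, pzs = Counter(z), Counter(s), Counter(zip(z, s))
    mi = 0.0
    for (zi, si), c in pzs.items():
        p_joint = c / n
        # p_joint * log(p_joint / (p(z) * p(s))), with marginals as counts / n
        mi += p_joint * np.log(p_joint * n * n / (pz[zi] * ps[si]))
    return mi

# Codes perfectly aligned with the attribute: I(Z; S) = H(S) = log 2 nats
mi_dependent = discrete_mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
# Codes independent of the attribute: I(Z; S) = 0
mi_independent = discrete_mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

Driving such an estimate toward zero during training is one way to make a representation invariant to the sensitive attribute.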

Fair Interpretable Representation Learning with Correction Vectors

no code implementations · 7 Feb 2022 · Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel, Alexander Segner, Roberto Esposito, Stefan Kramer

Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information.

Tasks: Representation Learning

Partitioned Least Squares

2 code implementations · 29 Jun 2020 · Roberto Esposito, Mattia Cerrato, Marco Locatelli

In this paper we propose a variant of the linear least squares model that allows practitioners to partition the input features into groups of variables that are required to contribute similarly to the final result.
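One common formulation of such a grouped model predicts y ≈ Σₖ βₖ · (Σ_{j∈group k} αⱼ xⱼ), with one scale βₖ per group and nonnegative within-group weights αⱼ summing to one, so features in a group contribute with the same sign and a shared magnitude. Below is an illustrative alternating-minimization heuristic under those assumptions; it is a sketch, not the paper's exact algorithm, and the function name is hypothetical:

```python
import numpy as np

def partitioned_ls(X, y, groups, n_iter=50):
    """Alternating-minimization sketch of a partitioned least squares model.

    groups: array of group indices, one per column of X.
    Returns (alpha, beta): nonnegative within-group weights that sum to one
    per group, and one least-squares scale per group.
    """
    groups = np.asarray(groups)
    K = groups.max() + 1
    d = X.shape[1]
    # Start from uniform within-group weights.
    alpha = np.array([1.0 / np.sum(groups == groups[j]) for j in range(d)])
    for _ in range(n_iter):
        # beta step: ordinary least squares on the alpha-collapsed group columns.
        G = np.stack(
            [X[:, groups == k] @ alpha[groups == k] for k in range(K)], axis=1
        )
        beta, *_ = np.linalg.lstsq(G, y, rcond=None)
        # alpha step: per-feature least squares with columns scaled by beta,
        # then clip to nonnegative and renormalize each group to sum to one.
        a, *_ = np.linalg.lstsq(X * beta[groups], y, rcond=None)
        a = np.clip(a, 0.0, None)
        for k in range(K):
            m = groups == k
            s = a[m].sum()
            if s > 0:
                alpha[m] = a[m] / s
    return alpha, beta

# Toy recovery: true coefficients (0.6, 1.4, -0.5, -0.5) with groups (0,0,1,1),
# i.e. beta = (2, -1), alpha = (0.3, 0.7, 0.5, 0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
g = np.array([0, 0, 1, 1])
y = X @ np.array([0.6, 1.4, -0.5, -0.5])
alpha, beta = partitioned_ls(X, y, g)
```

The per-group constraints make the fitted model easy to read off: βₖ gives the sign and overall weight of group k, and αⱼ gives each feature's share within its group.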
