no code implementations • 14 Apr 2024 • Francesco Binucci, Paolo Banelli, Paolo Di Lorenzo, Sergio Barbarossa
This approach is particularly useful whenever a device must transmit data (or features) to a server that fulfils an inference task, as it provides a principled way to extract the features most relevant to the task while seeking the best trade-off among the size of the transmitted feature vector, inference accuracy, and complexity.
no code implementations • 25 Mar 2024 • Simone Fiorellino, Claudio Battiloro, Emilio Calvanese Strinati, Paolo Di Lorenzo
This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches via latent space alignment.
no code implementations • 14 Feb 2024 • Theodore Papamarkou, Tolga Birdal, Michael Bronstein, Gunnar Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Liò, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T. Schaub, Petar Veličković, Bei Wang, Yusu Wang, Guo-Wei Wei, Ghada Zamzmi
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models.
no code implementations • 12 Feb 2024 • Emilio Calvanese Strinati, Paolo Di Lorenzo, Vincenzo Sciancalepore, Adnan Aijaz, Marios Kountouris, Deniz Gündüz, Petar Popovski, Mohamed Sana, Photios A. Stavrou, Beatriz Soret, Nicola Cordeschi, Simone Scardapane, Mattia Merluzzi, Lanfranco Zanzi, Mauro Boldi Renato, Tony Quek, Nicola di Pietro, Olivier Forceville, Francesca Costanzo, Peizheng Li
Recent advances in AI technologies have notably expanded device intelligence, fostering federation and cooperation among distributed AI agents.
1 code implementation • 19 Dec 2023 • Fabio Saggese, Victor Croisfelt, Francesca Costanzo, Junya Shiraishi, Radosław Kotaba, Paolo Di Lorenzo, Petar Popovski
This paper investigates the role and the impact of control operations for dynamic mobile edge computing (MEC) empowered by Reconfigurable Intelligent Surfaces (RISs), in which multiple devices offload their computation tasks to an access point (AP) equipped with an edge server (ES), with the help of the RIS.
no code implementations • 6 Dec 2023 • Francesco Binucci, Mattia Merluzzi, Paolo Banelli, Emilio Calvanese Strinati, Paolo Di Lorenzo
In this work, we explore the opportunity of DNN splitting at the edge of 6G wireless networks to enable low energy cooperative inference with target delay and accuracy with a goal-oriented perspective.
1 code implementation • 27 Nov 2023 • Gabriele D'Acunto, Paolo Di Lorenzo, Francesco Bonchi, Stefania Sardellitti, Sergio Barbarossa
Despite the large research effort devoted to learning dependencies between time series, the state of the art still faces a major limitation: existing methods learn partial correlations but fail to discriminate across distinct frequency bands.
no code implementations • 21 Oct 2023 • Paolo Di Lorenzo, Mattia Merluzzi, Francesco Binucci, Claudio Battiloro, Paolo Banelli, Emilio Calvanese Strinati, Sergio Barbarossa
Internet of Things (IoT) applications combine sensing, wireless communication, intelligence, and actuation, enabling the interaction among heterogeneous devices that collect and process considerable amounts of data.
1 code implementation • 5 Sep 2023 • Claudio Battiloro, Lucia Testa, Lorenzo Giusti, Stefania Sardellitti, Paolo Di Lorenzo, Sergio Barbarossa
The aim of this work is to introduce Generalized Simplicial Attention Neural Networks (GSANs), i.e., novel neural architectures designed to process data defined on simplicial complexes using masked self-attentional layers.
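The key mechanism named in the abstract, masked self-attention over simplices, can be illustrated with a minimal numpy sketch. This is not the authors' GSAN implementation: the weight matrices and the neighbourhood mask (which in a simplicial complex would be derived from upper/lower incidence relations) are assumed given.

```python
import numpy as np

def masked_self_attention(X, mask, Wq, Wk, Wv):
    """One masked self-attention layer over simplex features.

    X    : (n, d) features, one row per simplex (e.g. per edge)
    mask : (n, n) boolean neighbourhood relation between simplices
           (should include self-loops so every row attends somewhere)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = np.where(mask, scores, -np.inf)  # attend only to neighbours
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    att = e / e.sum(axis=1, keepdims=True)    # row-wise softmax
    return att @ V
```

With a self-loop-only mask each simplex attends solely to itself, so the layer reduces to the value projection.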
no code implementations • 25 May 2023 • Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein, Simone Scardapane, Paolo Di Lorenzo
Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it.
no code implementations • 18 May 2023 • Kyriakos Stylianopoulos, Mattia Merluzzi, Paolo Di Lorenzo, George C. Alexandropoulos
In this paper, we propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge, in the context of 6G networks endowed with reconfigurable intelligent surfaces (RISs).
no code implementations • 3 May 2023 • Francesco Binucci, Paolo Banelli, Paolo Di Lorenzo, Sergio Barbarossa
A common challenge in running inference tasks remotely is to extract and transmit only the features that are most significant for the inference task.
no code implementations • 20 Mar 2023 • Claudio Battiloro, Zhiyang Wang, Hans Riess, Paolo Di Lorenzo, Alejandro Ribeiro
We define tangent bundle filters and tangent bundle neural networks (TNNs) based on this convolution operation, which are novel continuous architectures operating on tangent bundle signals, i.e., vector fields over the manifolds.
no code implementations • 16 Feb 2023 • Claudio Battiloro, Stefania Sardellitti, Sergio Barbarossa, Paolo Di Lorenzo
Weighing the topological domain over which data can be represented and analysed is a key strategy in many signal processing and machine learning applications, enabling the extraction and exploitation of meaningful data features and their (higher order) relationships.
1 code implementation • 26 Oct 2022 • Claudio Battiloro, Paolo Di Lorenzo, Sergio Barbarossa
This paper introduces topological Slepians, i.e., a novel class of signals defined over topological spaces (e.g., simplicial complexes) that are maximally concentrated on the topological domain (e.g., over a set of nodes, edges, triangles, etc.).
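The classical Slepian construction behind this idea can be sketched in a few lines: bandlimit with a projector built from Laplacian eigenvectors, then find the bandlimited vectors whose energy is most concentrated on a chosen simplex set. This sketch uses a generic symmetric Laplacian (the paper works with Hodge Laplacians of simplicial complexes); all inputs are assumed given.

```python
import numpy as np

def topological_slepians(L, band_idx, concentration_set, k=2):
    """Maximally concentrated bandlimited vectors (Slepian-style sketch).

    L : symmetric positive-semidefinite Laplacian of the domain
    band_idx : indices of Laplacian eigenvectors spanning the band
    concentration_set : simplex indices where energy should concentrate
    """
    _, U = np.linalg.eigh(L)
    UF = U[:, band_idx]
    B = UF @ UF.T                                  # bandlimiting projector
    D = np.zeros_like(L)
    D[concentration_set, concentration_set] = 1.0  # set-limiting operator
    # concentration eigenproblem: top eigenvectors of B D B
    w, V = np.linalg.eigh(B @ D @ B)
    order = np.argsort(w)[::-1]
    return w[order[:k]], V[:, order[:k]]
```

The eigenvalues lie in [0, 1] and measure the fraction of energy a bandlimited vector can place on the chosen set.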
no code implementations • 26 Oct 2022 • Claudio Battiloro, Zhiyang Wang, Hans Riess, Paolo Di Lorenzo, Alejandro Ribeiro
In this work we introduce a convolution operation over the tangent bundle of Riemannian manifolds exploiting the Connection Laplacian operator.
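Once a discrete Connection Laplacian is available, the convolution reduces to a polynomial filter applied to the flattened vector field. The sketch below shows only that filtering step; building the Connection Laplacian itself (e.g., via vector diffusion maps on sampled manifold points) is the substantive part and is assumed given here.

```python
import numpy as np

def tangent_bundle_filter(delta, coeffs, f):
    """Polynomial convolutional filter h(Delta) f = sum_k h_k Delta^k f.

    delta : (nd, nd) discrete Connection Laplacian (assumed given)
    coeffs: filter taps h_0, ..., h_K
    f     : flattened tangent-bundle signal (a d-vector per sample point)
    """
    out = np.zeros_like(f)
    power = f.copy()           # Delta^0 f
    for h in coeffs:
        out += h * power
        power = delta @ power  # next power of Delta applied to f
    return out
```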
1 code implementation • 11 Oct 2022 • Domenico Mattia Cinque, Claudio Battiloro, Paolo Di Lorenzo
The goal of this paper is to introduce pooling strategies for simplicial convolutional neural networks.
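As a point of reference for what "pooling over simplices" means, here is a minimal max-pooling sketch over a given partition of the simplices. How to build a topology-aware partition is the paper's actual contribution; the partition is simply assumed here.

```python
import numpy as np

def simplicial_max_pool(X, clusters):
    """Max-pool simplex features over a partition.

    X        : (n, d) features, one row per simplex (e.g. per edge)
    clusters : list of index arrays partitioning the n simplices
    """
    return np.stack([X[idx].max(axis=0) for idx in clusters])
```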
1 code implementation • 16 Sep 2022 • Lorenzo Giusti, Claudio Battiloro, Lucia Testa, Paolo Di Lorenzo, Stefania Sardellitti, Sergio Barbarossa
In this paper, we introduce Cell Attention Networks (CANs), a neural architecture operating on data defined over the vertices of a graph, representing the graph as the 1-skeleton of a cell complex introduced to capture higher order interactions.
Ranked #7 on Graph Classification on NCI109
no code implementations • 16 Jul 2022 • Gabriele D'Acunto, Paolo Di Lorenzo, Sergio Barbarossa
The inference of causal structures from observed data plays a key role in unveiling the underlying dynamics of the system.
no code implementations • 21 Apr 2022 • Mattia Merluzzi, Claudio Battiloro, Paolo Di Lorenzo, Emilio Calvanese Strinati
Learning at the edge is a challenging task from several perspectives, since data must be collected by end devices (e.g., sensors), possibly pre-processed (e.g., data compression), and finally processed remotely to output the result of training and/or inference phases.
no code implementations • 25 Feb 2022 • Francesco Pezone, Sergio Barbarossa, Paolo Di Lorenzo
The IB principle is used to design the encoder in order to find an optimal balance between representation complexity and relevance of the encoded data with respect to the goal.
no code implementations • 21 Dec 2021 • Paolo Di Lorenzo, Mattia Merluzzi, Emilio Calvanese Strinati, Sergio Barbarossa
In this paper, we propose a novel algorithm for energy-efficient, low-latency dynamic mobile edge computing (MEC), in the context of beyond 5G networks endowed with Reconfigurable Intelligent Surfaces (RISs).
no code implementations • 8 Aug 2020 • Mattia Merluzzi, Nicola di Pietro, Paolo Di Lorenzo, Emilio Calvanese Strinati, Sergio Barbarossa
We propose a novel strategy for energy-efficient dynamic computation offloading, in the context of edge-computing-aided beyond 5G networks.
no code implementations • 13 Jul 2020 • Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo
After formulating the centralized GCN training problem, we first show how to make inference in a distributed scenario where the underlying data graph is split among different agents.
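The row-partitioned structure that makes distributed GCN inference possible can be seen in a small sketch: each agent computes its block of σ(Â X W) using only its own rows of Â, provided it has received its neighbours' features. Message passing is elided here (each agent is handed the full X); names are illustrative, not the paper's API.

```python
import numpy as np

def distributed_gcn_layer(A_hat, X, W, parts):
    """One GCN layer ReLU(A_hat X W), computed per agent over a row partition.

    A_hat : (n, n) normalized adjacency (with self-loops)
    parts : ordered list of row-index arrays, one per agent
    """
    outs = []
    for rows in parts:
        # locally: agent needs only A_hat[rows] and the features of the
        # nodes those rows touch (received from neighbouring agents)
        outs.append(np.maximum(A_hat[rows] @ X @ W, 0.0))
    return np.vstack(outs)
```

Stacking the per-agent blocks reproduces the centralized layer exactly, which is why inference distributes without approximation.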
no code implementations • 30 Apr 2020 • Paolo Di Lorenzo, Simone Scardapane
We study distributed stochastic nonconvex optimization in multi-agent networks.
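A generic template for this setting is gossip-based stochastic gradient descent: each agent mixes its iterate with its neighbours' (via a doubly-stochastic matrix matching the network) and takes a local stochastic gradient step. This is a textbook sketch of the setting, not the specific algorithm of the paper.

```python
import numpy as np

def decentralized_sgd_step(xs, grads, W_mix, step):
    """One round of gossip + local stochastic gradient descent.

    xs    : (n_agents, d) local iterates, one row per agent
    grads : (n_agents, d) local stochastic gradients at xs
    W_mix : (n_agents, n_agents) doubly-stochastic mixing matrix
    """
    return W_mix @ xs - step * grads
```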
no code implementations • 12 Sep 2017 • Paolo Di Lorenzo, Paolo Banelli, Elvin Isufi, Sergio Barbarossa, Geert Leus
Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs.
1 code implementation • 15 Jun 2017 • Simone Scardapane, Paolo Di Lorenzo
Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance.
1 code implementation • 24 Oct 2016 • Simone Scardapane, Paolo Di Lorenzo
The aim of this paper is to develop a general framework for training neural networks (NNs) in a distributed environment, where training data is partitioned over a set of agents that communicate with each other through a sparse, possibly time-varying, connectivity pattern.
no code implementations • 18 Feb 2016 • Paolo Di Lorenzo, Sergio Barbarossa, Paolo Banelli, Stefania Sardellitti
The aim of this paper is to propose a least mean squares (LMS) strategy for adaptive estimation of signals defined over graphs.
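The core of such an LMS strategy is a simple recursion: correct the current estimate with the error on the sampled vertices, projected back onto the bandlimited subspace. The sketch below follows that structure; the bandlimiting projector and sampling set are assumed given.

```python
import numpy as np

def graph_lms_step(x_hat, y, sample_mask, B_proj, mu):
    """One LMS update for a bandlimited graph signal.

    x_hat      : current estimate of the graph signal
    y          : noisy observation (only sampled entries are used)
    sample_mask: boolean vector selecting the sampled vertices
    B_proj     : projector onto the bandlimited subspace (U_F U_F^T)
    mu         : step size
    """
    err = np.where(sample_mask, y - x_hat, 0.0)  # error on sampled vertices only
    return x_hat + mu * (B_proj @ err)
```

With a suitable step size and a sampling set that preserves bandlimited signals, the recursion converges to the true signal in the noiseless case.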