no code implementations • 6 Feb 2024 • Parsa Moradi, Mohammad Ali Maddah-Ali
Resilience against stragglers is a critical element of prediction serving systems, which execute inference on input data using a pre-trained machine-learning model.
no code implementations • 20 Feb 2023 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Giuseppe Caire
In this paper, we propose ByzSecAgg, an efficient secure aggregation scheme for federated learning that is protected against Byzantine attacks and privacy leakages.
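ByzSecAgg couples privacy with Byzantine robustness. As a toy illustration of the robustness half alone, the sketch below uses coordinate-wise median, a standard Byzantine-robust aggregation rule (not ByzSecAgg's actual mechanism, which runs its checks on secret-shared data): a single malicious update can drag the mean arbitrarily far, but not the median.

```python
import numpy as np

# Coordinate-wise median as a Byzantine-robust aggregation rule.
# (Illustrative only: ByzSecAgg's robustness checks operate on
# secret-shared data, not on plaintext updates like these.)
rng = np.random.default_rng(0)
honest = [1.0 + 0.1 * rng.standard_normal(6) for _ in range(7)]
byzantine = [np.full(6, 100.0)]          # one adversarial update

updates = np.stack(honest + byzantine)
robust = np.median(updates, axis=0)      # stays near 1.0 per coordinate
naive = updates.mean(axis=0)             # pulled far off by the attacker
print(robust, naive, sep="\n")
```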
no code implementations • 24 Mar 2022 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire
We propose SwiftAgg+, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N \in \mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.
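The core guarantee, that the server learns only the sum of the models and nothing about any individual one, can be illustrated with classical pairwise additive masking. This is a minimal sketch of the idea secure aggregation builds on, not the SwiftAgg+ protocol itself (which instead relies on secret sharing among users to also tolerate dropouts):

```python
import numpy as np

def pairwise_masks(num_users, model_size, seed=0):
    """Antisymmetric pairwise masks: m[i, j] = -m[j, i]."""
    rng = np.random.default_rng(seed)
    m = np.zeros((num_users, num_users, model_size))
    for i in range(num_users):
        for j in range(i + 1, num_users):
            r = rng.standard_normal(model_size)
            m[i, j], m[j, i] = r, -r
    return m

N, L = 4, 8                               # users and model size
rng = np.random.default_rng(1)
models = [rng.standard_normal(L) for _ in range(N)]
m = pairwise_masks(N, L)

# Each user sends its model plus the sum of its masks; the server
# only sees masked vectors, yet the masks cancel in the aggregate.
sent = [models[i] + m[i].sum(axis=0) for i in range(N)]
aggregate = sum(sent)
assert np.allclose(aggregate, sum(models))
```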
no code implementations • 8 Feb 2022 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire
We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data, in a privacy-preserving manner.
no code implementations • 2 Mar 2021 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali
Gradient coding allows a master node to recover the aggregate of the partial gradients, computed by worker nodes over their local data sets, with minimum communication cost and in the presence of stragglers.
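A minimal sketch of the gradient-coding idea, using the classic 3-worker fractional-repetition construction of Tandon et al. rather than the scheme proposed in this paper: each worker transmits one coded combination of two partial gradients, and the master decodes the full aggregate from any two responses.

```python
import numpy as np

# Encoding matrix: 3 workers, 3 data parts, tolerates 1 straggler.
B = np.array([[0.5, 1.0,  0.0],   # worker 0 sends g1/2 + g2
              [0.0, 1.0, -1.0],   # worker 1 sends g2 - g3
              [0.5, 0.0,  1.0]])  # worker 2 sends g1/2 + g3

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 5))   # partial gradients g1, g2, g3
coded = B @ g                     # one coded message per worker

for straggler in range(3):
    alive = [w for w in range(3) if w != straggler]
    # Decoding weights a with a @ B[alive] = all-ones row.
    a, *_ = np.linalg.lstsq(B[alive].T, np.ones(3), rcond=None)
    assert np.allclose(a @ coded[alive], g.sum(axis=0))
```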
no code implementations • 17 Sep 2020 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali
In coded computing, coding is applied across data sets and computation is carried out over the coded data, such that the results of any subset of worker nodes of a certain size are enough to recover the final result.
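A toy instance of this any-subset recovery property, using a simple (3, 2) MDS code over a distributed matrix-vector product. The paper's construction differs, but the decoding logic, ignore the stragglers and invert the code from any k results, is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x = rng.standard_normal(6)

# Split A into k = 2 blocks plus one parity block: a (3, 2) MDS
# code, so the results of ANY 2 of the 3 workers recover A @ x.
A1, A2 = A[:2], A[2:]
results = {w: M @ x for w, M in enumerate([A1, A2, A1 + A2])}

# Say worker 1 (holding A2) straggles: subtract to recover its part.
y1 = results[0]
y2 = results[2] - results[0]
assert np.allclose(np.concatenate([y1, y2]), A @ x)
```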
no code implementations • 16 Aug 2020 • Mahdi Nouri Boroujerdi, Mohammad Akbari, Roghayeh Joda, Mohammad Ali Maddah-Ali, Babak Hossein Khalaj
In this paper, we employ deep reinforcement learning to develop a novel radio resource allocation and packet scheduling scheme for different Quality of Service (QoS) requirements, applicable to LTE-Advanced and 5G networks.
no code implementations • 27 Mar 2020 • Naeimeh Omidvar, Mohammad Ali Maddah-Ali, Hamed Mahdavi
In this paper, we propose a method of distributed stochastic gradient descent (SGD) with low communication load and computational complexity, while retaining fast convergence.
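One generic way to cut the communication load, shown purely for illustration and not claimed to be this paper's method, is to compress each worker's gradient to its signs plus a single scale before averaging:

```python
import numpy as np

# Scaled sign compression (in the spirit of signSGD variants):
# each worker sends 1 bit per coordinate plus one scalar.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(8) for _ in range(5)]  # one per worker

compressed = [np.sign(g) * np.abs(g).mean() for g in grads]
update = np.mean(compressed, axis=0)    # server-side aggregate
true = np.mean(grads, axis=0)           # uncompressed, for comparison
print(np.corrcoef(update, true)[0, 1])  # directions broadly agree
```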
1 code implementation • 26 Mar 2020 • Hamidreza Ehteram, Mohammad Ali Maddah-Ali, Mahtab Mirmohseni
One of the major challenges in this setup is to guarantee the privacy of the client data.
no code implementations • 17 Oct 2017 • Qian Yu, Mohammad Ali Maddah-Ali, A. Salman Avestimehr
We consider the problem of computing the Fourier transform of high-dimensional vectors in a distributed fashion over a cluster of machines consisting of a master node and multiple worker nodes, where the worker nodes can only store and process a fraction of the inputs.
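Since each worker stores only a fraction of the input, the natural starting point is the Cooley-Tukey decomposition, which splits one large DFT into smaller DFTs on disjoint parts of the vector. The sketch below distributes that decomposition across two (non-straggling) workers; the paper's contribution is a coding layer added on top for straggler resilience, which is not shown here.

```python
import numpy as np

def dft_via_two_workers(x):
    """Radix-2 Cooley-Tukey split: each worker stores half of x and
    computes an N/2-point DFT; the master combines with twiddles."""
    N = len(x)
    E = np.fft.fft(x[0::2])   # worker 1: even-indexed samples
    O = np.fft.fft(x[1::2])   # worker 2: odd-indexed samples
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([E + tw * O, E - tw * O])

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(dft_via_two_workers(x), np.fft.fft(x))
```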
3 code implementations • NeurIPS 2017 • Qian Yu, Mohammad Ali Maddah-Ali, A. Salman Avestimehr
We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, each of which can store parts of the input matrices.
Information Theory • Distributed, Parallel, and Cluster Computing
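A well-known construction for this setting is the polynomial code, in which each worker multiplies two coded blocks and the master interpolates the product blocks from any fixed number of results. A minimal sketch with both matrices split in two (illustrative evaluation points and block counts, not necessarily this paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))

# Split A by rows and B by columns (m = n = 2).
A0, A1 = A[:2], A[2:]
B0, B1 = B[:, :2], B[:, 2:]

# Worker at point x computes (A0 + x*A1) @ (B0 + x**2 * B1): a
# degree-3 matrix polynomial with coefficients A0B0, A1B0, A0B1, A1B1.
points = [1.0, 2.0, 3.0, 4.0, 5.0]     # 5 workers, any 4 suffice
work = [(A0 + x * A1) @ (B0 + x**2 * B1) for x in points]

# Suppose the last worker straggles: interpolate from the other four.
V = np.vander(np.array(points[:4]), 4, increasing=True)
c = np.einsum('ij,jkl->ikl', np.linalg.inv(V), np.stack(work[:4]))
C = np.block([[c[0], c[2]],
              [c[1], c[3]]])           # reassemble the product blocks
assert np.allclose(C, A @ B)
```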
2 code implementations • 16 Feb 2017 • Songze Li, Sucha Supittayapornpong, Mohammad Ali Maddah-Ali, A. Salman Avestimehr
We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named Coded TeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce.
Distributed, Parallel, and Cluster Computing • Information Theory
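Coded TeraSort layers a coded data shuffle on top of TeraSort's range-partitioning pattern. The sketch below shows only that baseline pattern, partition keys by sampled splitters, sort each bucket locally, concatenate, without the coding that yields the speedup:

```python
import numpy as np

rng = np.random.default_rng(0)
keys = rng.integers(0, 1000, size=40)

# Range-partition ("shuffle") so bucket i holds a contiguous key range;
# real TeraSort picks these splitters by sampling the input.
splitters = [250, 500, 750]
buckets = [[] for _ in range(4)]
for k in keys:
    buckets[np.searchsorted(splitters, k, side='right')].append(k)

# Local sorts on each reducer; the concatenation is globally sorted.
result = np.concatenate([np.sort(b) for b in buckets])
assert np.array_equal(result, np.sort(keys))
```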