no code implementations • 27 Apr 2023 • Christian A. Schroth, Stefan Vlaski, Abdelhak M. Zoubir
Classically, aggregation in distributed learning is based on averaging, which is statistically efficient, but susceptible to attacks by even a small number of malicious agents.
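To illustrate why plain averaging is fragile, the sketch below compares a mean aggregate with a coordinate-wise median, one common robust alternative; the paper's actual estimator may differ, and all data and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten honest agents report noisy copies of the true model; one attacker
# reports an arbitrarily large vector.
true_model = np.ones(5)
updates = true_model + 0.1 * rng.standard_normal((10, 5))
updates[0] = 1e6  # a single malicious agent

mean_agg = updates.mean(axis=0)          # pulled arbitrarily far from the truth
median_agg = np.median(updates, axis=0)  # barely affected

print("mean error:  ", np.linalg.norm(mean_agg - true_model))
print("median error:", np.linalg.norm(median_agg - true_model))
```

A single corrupted update suffices to move the mean by an unbounded amount, while the median tolerates a minority of outliers.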
no code implementations • 14 Apr 2023 • Shreya Wadehra, Roula Nassif, Stefan Vlaski
Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents.
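A minimal sketch of a consensus-type decentralized gradient update of the kind referenced here, assuming a doubly stochastic combination matrix and toy quadratic local costs; every symbol below is an illustrative placeholder, not the paper's construction.

```python
import numpy as np

# Three agents; A is a doubly stochastic combination matrix.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Agent k minimizes 0.5*(w - target_k)^2; targets differ across agents,
# so the consensus step pushes the network toward a common model.
targets = np.array([1.0, 2.0, 3.0])
w = np.zeros(3)
mu = 0.1  # step size

for _ in range(300):
    # Consensus gradient descent: combine neighbors' iterates,
    # then take a local gradient step.
    w = A @ w - mu * (w - targets)

print(w)  # all agents settle near the average target 2.0
```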
no code implementations • 23 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
no code implementations • 3 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
This work focuses on adversarial learning over graphs.
no code implementations • 16 Jan 2023 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the privatization of distributed learning and optimization strategies.
no code implementations • 5 Dec 2022 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
This work studies networked agents cooperating to track a dynamical state of nature under partial information.
no code implementations • 26 Oct 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the generation of dependent random numbers in a distributed fashion in order to enable privatized distributed learning by networked agents.
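One way dependent randomness can help, sketched below under strong simplifying assumptions: agents add noise terms constructed to cancel in the network sum, so each individual message is masked while the aggregate remains exact. This illustrates the general idea only, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

values = np.array([1.0, 2.0, 3.0, 4.0])      # private local values
noise = 10.0 * rng.standard_normal(4)         # large masking noise
noise -= noise.mean()                         # force the noise to sum to zero

messages = values + noise                     # individually uninformative
print(messages)
print(messages.mean(), "==", values.mean())   # the aggregate is unaffected
```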
no code implementations • 25 Oct 2022 • Stefan Vlaski, Soummya Kar, Ali H. Sayed, José M. F. Moura
Moreover, and significantly, theory and applications show that networked agents, through cooperation and sharing, are able to match the performance of cloud or federated solutions, while offering the potential for improved privacy, increased resilience, and resource savings.
no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed
In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.
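A hedged sketch of the projection-based idea behind subspace constraints: each gradient step is followed by a projection onto the constraint subspace, here a one-dimensional span. The projector, cost, and step size are illustrative assumptions.

```python
import numpy as np

# Constrain the iterate to lie in span{u}.
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
P = np.outer(u, u)             # orthogonal projector onto the subspace

target = np.array([3.0, 1.0])  # minimizer of the unconstrained quadratic
w = np.zeros(2)
mu = 0.1

for _ in range(200):
    w = P @ (w - mu * (w - target))   # projected gradient step

print(w)  # converges to the projection of target onto span{u}: [2., 2.]
```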
no code implementations • 1 Apr 2022 • Stefan Vlaski, Christian Schroth, Michael Muma, Abdelhak M. Zoubir
This is followed by an aggregation step, which traditionally takes the form of a (weighted) average.
no code implementations • 18 Mar 2022 • Roula Nassif, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
Observations collected by agents in a network may be unreliable due to observation noise or interference.
no code implementations • 14 Mar 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning is a semi-distributed algorithm, where a server communicates with multiple dispersed clients to learn a global model.
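The snippet below sketches the server/client interaction pattern described: broadcast the global model, let each client take a few local SGD steps on its own data, then average the returned models. This is FedAvg-style pseudocode made runnable with toy data and parameters, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each client holds data drawn around its own mean (heterogeneity).
client_data = [rng.normal(loc=m, size=50) for m in (1.0, 2.0, 3.0)]

w_global = 0.0
for round_ in range(20):
    local_models = []
    for data in client_data:
        w = w_global                    # client starts from the global model
        for x in rng.choice(data, 10):  # a few local SGD steps
            w -= 0.1 * (w - x)          # gradient of 0.5*(w - x)^2
        local_models.append(w)
    w_global = np.mean(local_models)    # server averages the client models

print(w_global)  # near the average of the client means, ~2.0
```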
no code implementations • 14 Mar 2022 • Ping Hu, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
Adaptive social learning is a useful tool for studying distributed decision-making problems over graphs.
no code implementations • 14 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed
Social learning algorithms provide models for the formation of opinions over social networks resulting from local reasoning and peer-to-peer exchanges.
no code implementations • 11 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed
For a given graph topology, these algorithms allow for the prediction of formed opinions.
1 code implementation • 17 Dec 2021 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
In the proposed social machine learning (SML) strategy, two phases are present: in the training phase, classifiers are independently trained to generate a belief over a set of hypotheses using a finite number of training samples; in the prediction phase, classifiers evaluate streaming unlabeled observations and share their instantaneous beliefs with neighboring classifiers.
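A minimal sketch of the prediction-phase belief exchange, assuming log-linear (geometric) pooling of neighboring beliefs with uniform weights and toy Gaussian likelihoods; all parameters below are assumptions, not the SML strategy's actual components.

```python
import numpy as np

def likelihoods(x, means=(0.0, 1.0)):
    # Two hypotheses: unit-variance Gaussians with different means.
    return np.array([np.exp(-0.5 * (x - m) ** 2) for m in means])

A = np.full((3, 3), 1.0 / 3.0)   # fully connected graph, uniform weights
beliefs = np.full((3, 2), 0.5)   # 3 agents, 2 hypotheses, flat priors
rng = np.random.default_rng(3)

for _ in range(50):
    obs = rng.normal(1.0, 1.0, size=3)   # data generated under hypothesis 1
    # Local Bayesian update from each agent's own observation.
    beliefs *= np.array([likelihoods(o) for o in obs])
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    # Geometric (log-linear) pooling of neighboring beliefs.
    beliefs = np.exp(A @ np.log(beliefs))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs)  # belief mass concentrates on hypothesis 1 at every agent
```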
no code implementations • 26 Nov 2021 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
This work proposes a multi-agent filtering algorithm over graphs for finite-state hidden Markov models (HMMs), which can be used for sequential state estimation or for tracking opinion formation over dynamic social networks.
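To make the filtering recursion concrete, here is a hedged single-chain sketch of the prediction and update steps for a finite-state HMM; the multi-agent version would additionally pool beliefs over the graph. The transition and emission models below are invented for illustration.

```python
import numpy as np

T = np.array([[0.9, 0.1],   # state transition matrix
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],   # emission matrix: P(observation | state)
              [0.3, 0.7]])

belief = np.array([0.5, 0.5])
observations = [0, 0, 1, 0, 1, 1, 1]

for y in observations:
    belief = T.T @ belief     # prediction: push the belief through the dynamics
    belief = belief * E[:, y] # update: weight by the observation likelihood
    belief /= belief.sum()    # normalize

print(belief)  # posterior over the two hidden states
```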
no code implementations • 29 Mar 2021 • Stefan Vlaski, Ali H. Sayed
Adaptive networks have the capability to pursue solutions of global stochastic optimization problems by relying only on local interactions within neighborhoods.
no code implementations • 26 Mar 2021 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
We then show that such attacks can succeed by exhibiting strategies that the malicious agents can adopt to this end.
no code implementations • 14 Dec 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning encapsulates distributed learning strategies that are managed by a central unit.
no code implementations • 2 Dec 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy.
no code implementations • 26 Oct 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients for their learning updates.
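Below is a hedged sketch of the two levels of sampling mentioned: the server draws a random subset of agents each round, and each selected agent draws a minibatch to form its stochastic gradient. Sizes, models, and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
num_clients, dim = 10, 3
client_data = [rng.normal(loc=k, size=(100, dim)) for k in range(num_clients)]

w = np.zeros(dim)
mu = 0.05
for round_ in range(100):
    selected = rng.choice(num_clients, size=3, replace=False)  # sample agents
    grads = []
    for k in selected:
        idx = rng.choice(100, size=8, replace=False)           # sample local data
        batch = client_data[k][idx]
        grads.append(w - batch.mean(axis=0))  # stochastic gradient of a quadratic
    w -= mu * np.mean(grads, axis=0)          # server aggregates and updates

print(w)  # drifts toward the average of the client means
```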
no code implementations • 26 Oct 2020 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
A common assumption in the social learning literature is that agents exchange information in an unselfish manner.
no code implementations • 23 Oct 2020 • Stefan Vlaski, Ali H. Sayed
Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information as a result of repeated local exchanges of intermediate estimates.
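A minimal adapt-then-combine diffusion sketch of the repeated local exchanges referenced here, complementing the consensus example earlier in this list; all quantities are illustrative.

```python
import numpy as np

A = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])   # doubly stochastic combination weights
targets = np.array([1.0, 2.0, 3.0])
w = np.zeros(3)
mu = 0.1

for _ in range(300):
    psi = w - mu * (w - targets)  # adapt: local stochastic-gradient step
    w = A @ psi                   # combine: diffuse intermediate estimates

print(w)  # agents agree near the network-average minimizer 2.0
```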
no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.
no code implementations • 6 Oct 2020 • Mert Kayaalp, Stefan Vlaski, Ali H. Sayed
The formalism of meta-learning is well-suited to this decentralized setting, where the learner can benefit from information and computational power spread across the agents.
no code implementations • 4 Apr 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
The utilization of online stochastic algorithms is popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
no code implementations • 31 Mar 2020 • Stefan Vlaski, Ali H. Sayed
Rapid advances in data collection and processing capabilities have allowed for the use of increasingly complex models that give rise to nonconvex optimization problems.
no code implementations • 20 Feb 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed
Multitask learning is an approach to inductive transfer: what is learned for one problem is used to assist another. By treating the domain information contained in the training signals of related tasks as an inductive bias, it improves generalization relative to learning each task separately.
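As a hedged illustration of related tasks acting as an inductive bias, the sketch below couples two least-squares tasks through a simple proximity regularizer; the coupling strength eta and all data are invented, and the paper's multitask formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two related tasks: the true parameters are close but not identical.
data1 = rng.normal(loc=1.0, size=200)
data2 = rng.normal(loc=1.2, size=200)

w1, w2 = 0.0, 0.0
mu, eta = 0.05, 0.5   # step size and task-coupling strength

for _ in range(500):
    i, j = rng.integers(200), rng.integers(200)
    # Each task descends its own loss plus a term pulling it toward the other.
    g1 = (w1 - data1[i]) + eta * (w1 - w2)
    g2 = (w2 - data2[j]) + eta * (w2 - w1)
    w1, w2 = w1 - mu * g1, w2 - mu * g2

print(w1, w2)  # the two estimates are pulled toward each other
```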
no code implementations • 30 Oct 2019 • Stefan Vlaski, Ali H. Sayed
Under appropriate cooperation protocols and parameter choices, fully decentralized solutions for stochastic optimization have been shown to match the performance of centralized solutions and result in linear speedup (in the number of agents) relative to non-cooperative approaches in the strongly-convex setting.
no code implementations • 20 Sep 2019 • Stefan Vlaski, Lieven Vandenberghe, Ali H. Sayed
The purpose of this work is to develop and study a distributed strategy for Pareto optimization of an aggregate cost consisting of regularized risks.
no code implementations • 19 Aug 2019 • Stefan Vlaski, Ali H. Sayed
Recent years have seen increased interest in performance guarantees of gradient descent algorithms for non-convex optimization.
no code implementations • 3 Jul 2019 • Stefan Vlaski, Ali H. Sayed
In Part I [2] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point.
no code implementations • 21 Mar 2018 • Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed
In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly.
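The sketch below contrasts the two sampling schemes on a toy least-squares problem: random reshuffling visits every sample exactly once per epoch, while uniform sampling draws with replacement. It illustrates the mechanics only, not the paper's analysis; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, size=1000)
mu, epochs = 0.01, 30

def run(reshuffle):
    w = 0.0
    for _ in range(epochs):
        order = (rng.permutation(len(data)) if reshuffle
                 else rng.integers(0, len(data), len(data)))
        for i in order:
            w -= mu * (w - data[i])   # SGD step on 0.5*(w - x_i)^2
    return w

print("reshuffling:", run(True))
print("uniform:    ", run(False))
```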