no code implementations • 22 Apr 2024 • David R. Nickel, Anindya Bijoy Das, David J. Love, Christopher G. Brinton
In cognitive radio networks (CRNs), both spectrum sensing and resource allocation (SSRA) are critical to maximizing system throughput while minimizing collisions of secondary users with the primary network.
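As a point of reference for the sensing half of SSRA, a minimal energy-detection sketch is shown below; it is a generic textbook technique rather than the paper's SSRA method, and the noise variance, false-alarm target, and toy signal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_var, prob_false_alarm=0.05):
    """Classic energy detector: declare the band occupied when the measured
    energy exceeds a threshold chosen for a target false-alarm probability.
    Real-valued samples and a known noise variance are assumed here."""
    n = len(samples)
    energy = np.sum(samples ** 2)
    # Gaussian approximation of the noise-only energy distribution:
    # mean = n * noise_var, std = sqrt(2 * n) * noise_var
    threshold = noise_var * (n + np.sqrt(2 * n) * norm.ppf(1 - prob_false_alarm))
    return energy > threshold

rng = np.random.default_rng(0)
noise_only = rng.normal(scale=1.0, size=1000)              # vacant band
with_signal = noise_only + 0.5 * np.sin(np.arange(1000))   # occupied band
print(energy_detect(noise_only, noise_var=1.0))   # expected: False (most of the time)
print(energy_detect(with_signal, noise_var=1.0))  # expected: True
```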
no code implementations • 21 Apr 2024 • Myeung Suk Oh, Anindya Bijoy Das, Taejoon Kim, David J. Love, Christopher G. Brinton
In this work, we design a novel positioning neural network (P-NN) that utilizes the minimum description features to substantially reduce the complexity of deep learning-based WP.
no code implementations • 15 Apr 2024 • Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Christopher G. Brinton
Specifically, we introduce a \textit{smart information push-pull} methodology for data/embedding exchange tailored to FL settings with either soft or strict data privacy restrictions.
no code implementations • 9 Apr 2024 • Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton
Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}(\frac{1}{\sum_{i=1}^{N} \frac{1}{t_{i}}})$, where $t_{i}$ denotes the per-iteration time consumption at agent $i$ and $t_{\max}$ is the largest among them.
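As a quick numerical illustration of this gap (with hypothetical per-iteration times, not values from the paper), the synchronous cost is dictated by the slowest agent while the asynchronous cost aggregates all agents' rates:

```python
import numpy as np

# Hypothetical per-iteration times t_i for N = 5 agents (seconds); one straggler.
t = np.array([1.0, 1.2, 0.9, 1.1, 5.0])
N = len(t)

sync_cost = t.max() / N                 # O(t_max / N): dominated by the straggler
async_cost = 1.0 / np.sum(1.0 / t)      # O(1 / sum_i 1/t_i): straggler adds only one term

print(f"synchronous per-update cost : {sync_cost:.3f}")
print(f"asynchronous per-update cost: {async_cost:.3f}")
```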
no code implementations • 15 Feb 2024 • Seohyun Lee, Anindya Bijoy Das, Satyavrat Wagle, Christopher G. Brinton
Numerical analysis shows the advantages of the proposed method over available FL schemes in terms of convergence speed and straggler resilience across benchmark datasets.
no code implementations • 14 Feb 2024 • Myeung Suk Oh, Anindya Bijoy Das, Taejoon Kim, David J. Love, Christopher G. Brinton
A recent line of research has been investigating deep learning approaches to wireless positioning (WP).
1 code implementation • 5 Feb 2024 • Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton
Decentralized Federated Learning (DFL) has received significant recent research attention, capturing settings where both model updates and model aggregations -- the two key FL processes -- are conducted by the clients.
no code implementations • 3 Feb 2024 • Yun-Wei Chu, Dong-Jun Han, Seyyedali Hosseinalipour, Christopher G. Brinton
Most existing federated learning (FL) methodologies have assumed training begins from a randomly initialized model.
no code implementations • 30 Jan 2024 • Liangqi Yuan, Dong-Jun Han, Su Wang, Devesh Upadhyay, Christopher G. Brinton
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities.
no code implementations • 15 Jan 2024 • Yun-Wei Chu, Dong-Jun Han, Christopher G. Brinton
Federated learning (FL) is a promising approach for solving multilingual tasks, potentially enabling clients with their own language-specific data to collaboratively construct a high-quality neural machine translation (NMT) model.
no code implementations • 31 Dec 2023 • JungHoon Kim, Taejoon Kim, Anindya Bijoy Das, Seyyedali Hosseinalipour, David J. Love, Christopher G. Brinton
In this work, we aim to enhance and balance the communication reliability in GTWCs by minimizing the sum of error probabilities via joint design of encoders and decoders at the users.
no code implementations • 27 Dec 2023 • Surojit Ganguli, Zeyu Zhou, Christopher G. Brinton, David I. Inouye
Vertical Federated learning (VFL) is a class of FL where each client shares the same sample space but only holds a subset of the features.
no code implementations • 23 Dec 2023 • Dong-Jun Han, Seyyedali Hosseinalipour, David J. Love, Mung Chiang, Christopher G. Brinton
While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructure, preventing them from accessing the associated data-driven services.
1 code implementation • 14 Nov 2023 • Adam Piaseczny, Eric Ruzomberka, Rohit Parasnis, Christopher G. Brinton
This paper addresses this gap by analyzing the performance of decentralized FL for various adversarial placement strategies when adversaries can jointly coordinate their placement within a network.
no code implementations • 7 Nov 2023 • Su Wang, Roberto Morabito, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton
Our optimization methodology aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy while minimizing data processing and D2D communication resource consumption subject to realistic constraints on the network topology and device capabilities.
no code implementations • 27 Oct 2023 • Wenzhi Fang, Dong-Jun Han, Christopher G. Brinton
Hierarchical federated learning (HFL) has demonstrated promising scalability advantages over the traditional "star-topology" architecture-based federated learning (FL).
no code implementations • 16 Oct 2023 • Byunghyun Lee, Anindya Bijoy Das, David J. Love, Christopher G. Brinton, James V. Krogmeier
Dual-functional radar-communication (DFRC) is a promising technology where radar and communication functions operate on the same spectrum and hardware.
no code implementations • 10 Oct 2023 • Liangqi Yuan, Dong-Jun Han, Vishnu Pandi Chellapandi, Stanislaw H. Żak, Christopher G. Brinton
Multimodal federated learning (FL) aims to enrich model training in FL settings where devices are collecting measurements across multiple modalities (e.g., sensors measuring pressure, motion, and other types of data).
no code implementations • 4 Oct 2023 • Liangqi Yuan, Ziran Wang, Christopher G. Brinton
The Internet of Things (IoT) consistently generates vast amounts of data, sparking increasing concern over the protection of data privacy and the limitation of data misuse.
no code implementations • 21 Aug 2023 • Vishnu Pandi Chellapandi, Liangqi Yuan, Christopher G. Brinton, Stanislaw H Zak, Ziran Wang
This survey paper presents a review of the advancements made in the application of FL for CAV (FL4CAV).
no code implementations • 7 Aug 2023 • Satyavrat Wagle, Anindya Bijoy Das, David J. Love, Christopher G. Brinton
Augmenting federated learning (FL) with direct device-to-device (D2D) communications can help improve convergence speed and reduce model bias through rapid local information exchange.
no code implementations • 20 Jul 2023 • Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, Yo-Seb Jeon
In the second strategy, the non-dropped intermediate feature and gradient vectors are quantized using adaptive quantization levels determined based on the ranges of the vectors.
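A minimal sketch of range-adaptive uniform quantization in the spirit of that second strategy follows; the bit-width and rounding rule here are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def quantize_by_range(vec, num_bits):
    """Uniformly quantize a vector with levels spread over its own [min, max] range."""
    lo, hi = vec.min(), vec.max()
    if hi == lo:                      # constant vector: nothing to quantize
        return vec.copy()
    levels = 2 ** num_bits - 1
    step = (hi - lo) / levels
    indices = np.round((vec - lo) / step)
    return lo + indices * step

g = np.random.default_rng(1).normal(size=8).astype(np.float32)
g_hat = quantize_by_range(g, num_bits=4)
print(np.abs(g - g_hat).max())        # error is bounded by half a quantization step
```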
no code implementations • 8 Jun 2023 • Su Wang, Rajeev Sahay, Adam Piaseczny, Christopher G. Brinton
In this work, we first reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
no code implementations • 2 Jun 2023 • Liangqi Yuan, Ziran Wang, Lichao Sun, Philip S. Yu, Christopher G. Brinton
Federated learning (FL) has been gaining attention for its ability to share knowledge while keeping user data local, protecting privacy, increasing learning efficiency, and reducing communication overhead.
no code implementations • 22 May 2023 • Zhan-Lun Chang, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton
Our analysis sheds light on the joint impact of device training variables (e.g., the number of local gradient descent steps), asynchronous scheduling decisions (i.e., when a device trains a task), and dynamic data drifts on the performance of ML training for different tasks.
no code implementations • 30 Apr 2023 • Myeung Suk Oh, Seyyedali Hosseinalipour, Taejoon Kim, David J. Love, James V. Krogmeier, Christopher G. Brinton
For dynamic sensor selection, two greedy selection strategies are proposed, each of which exploits properties revealed in the derived CRLB expressions.
no code implementations • 24 Apr 2023 • Su Wang, Seyyedali Hosseinalipour, Christopher G. Brinton
Our methodology, Source-Target Determination and Link Formation (ST-LF), optimizes both (i) classification of devices into sources and targets and (ii) source-target link formation, in a manner that considers the trade-off between ML model accuracy and communication energy efficiency.
no code implementations • 20 Apr 2023 • Boris Velasevic, Rohit Parasnis, Christopher G. Brinton, Navid Azizan
Using this notion, we bound and compare the convergence rates of the studied algorithms and capture the effects of both cross-machine and local data heterogeneity on these quantities.
no code implementations • 15 Mar 2023 • Su Wang, Seyyedali Hosseinalipour, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Weifeng Su, Mung Chiang
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
no code implementations • 1 Mar 2023 • Eric Ruzomberka, David J. Love, Christopher G. Brinton, Arpit Gupta, Chih-Chun Wang, H. Vincent Poor
The demand for broadband wireless access is driving research and standardization of 5G and beyond-5G wireless systems.
no code implementations • 23 Feb 2023 • Anindya Bijoy Das, Aditya Ramamoorthy, David J. Love, Christopher G. Brinton
Federated learning (FL) is a popular technique for training a global model on data distributed across client devices.
no code implementations • 4 Feb 2023 • Sihua Wang, Mingzhe Chen, Cong Shen, Changchuan Yin, Christopher G. Brinton
The PS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all devices.
no code implementations • 21 Jan 2023 • Su Wang, Rajeev Sahay, Christopher G. Brinton
In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
no code implementations • 12 Jan 2023 • Myeung Suk Oh, Anindya Bijoy Das, Seyyedali Hosseinalipour, Taejoon Kim, David J. Love, Christopher G. Brinton
Radio access networks (RANs) in monolithic architectures have limited adaptability to supporting different network scenarios.
no code implementations • 16 Dec 2022 • Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon
A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently.
1 code implementation • 28 Nov 2022 • Rajeev Sahay, Minjun Zhang, David J. Love, Christopher G. Brinton
Recent work has advocated for the use of deep learning to perform power allocation in the downlink of massive MIMO (maMIMO) networks.
no code implementations • 23 Nov 2022 • Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton
We theoretically demonstrate that our methodology converges to the globally optimal learning model at a $\mathcal{O}(\frac{\ln{k}}{\sqrt{k}})$ rate under standard assumptions in the distributed learning and consensus literature.
no code implementations • 21 Sep 2022 • Sihua Wang, Mingzhe Chen, Christopher G. Brinton, Changchuan Yin, Walid Saad, Shuguang Cui
Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional device communication overhead.
no code implementations • 4 Aug 2022 • Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Mung Chiang, Christopher G. Brinton
In most of the current literature, FL has been studied for supervised ML tasks, in which edge devices collect labeled data.
no code implementations • 15 Jun 2022 • Rajeev Sahay, Swaroop Appadwedula, David J. Love, Christopher G. Brinton
Many communications and sensing applications hinge on the detection of a signal in a noisy, interference-heavy environment.
no code implementations • 21 May 2022 • Jing Guo, Raghu G. Raj, David J. Love, Christopher G. Brinton
Moreover, we are interested in sparse sensor selection using a marginalized weighted kernel approach to improve network resource efficiency by disabling less reliable sensors with minimal effect on classification performance. To achieve our goals, we develop a multi-sensor online kernel scalar quantization (MSOKSQ) learning strategy that operates on the sensor outputs at the fusion center.
no code implementations • 7 May 2022 • JungHoon Kim, Seyyedali Hosseinalipour, Andrew C. Marcum, Taejoon Kim, David J. Love, Christopher G. Brinton
Intelligent reflecting surfaces (IRS) consist of configurable meta-atoms, which can alter the wireless propagation environment through design of their reflection coefficients.
1 code implementation • 7 Apr 2022 • Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton
Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology.
no code implementations • 26 Mar 2022 • Bhargav Ganguly, Seyyedali Hosseinalipour, Kwang Taik Kim, Christopher G. Brinton, Vaneet Aggarwal, David J. Love, Mung Chiang
CE-FL also introduces floating aggregation point, where the local models generated at the devices and the servers are aggregated at an edge server, which varies from one model training round to another to cope with the network evolution in terms of data distribution and users' mobility.
no code implementations • 18 Mar 2022 • Dinh C. Nguyen, Seyyedali Hosseinalipour, David J. Love, Pubudu N. Pathirana, Christopher G. Brinton
To assist the ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs.
no code implementations • 7 Feb 2022 • Seyyedali Hosseinalipour, Su Wang, Nicolo Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Mung Chiang
PSL considers the realistic scenario where global aggregations are conducted with idle times in-between them for resource efficiency improvements, and incorporates data dispersion and model dispersion with local model condensation into FedL.
no code implementations • 27 Dec 2021 • David Nickel, Frank Po-Chen Lin, Seyyedali Hosseinalipour, Nicolo Michelusi, Christopher G. Brinton
Federated learning (FL) has emerged as a popular technique for distributing machine learning across wireless edge devices.
no code implementations • 3 Dec 2021 • JungHoon Kim, Seyyedali Hosseinalipour, Andrew C. Marcum, Taejoon Kim, David J. Love, Christopher G. Brinton
We consider a practical setting where (i) the IRS reflection coefficients are achieved by adjusting tunable elements embedded in the meta-atoms, (ii) the IRS reflection coefficients are affected by the incident angles of the incoming signals, (iii) the IRS is deployed in multi-path, time-varying channels, and (iv) the feedback link from the base station to the IRS has a low data rate.
no code implementations • 28 Oct 2021 • Yun-Wei Chu, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew S. Lan, Christopher G. Brinton
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
1 code implementation • 7 Sep 2021 • Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolò Michelusi
Federated learning has emerged as a popular technique for distributing model training across the network edge.
no code implementations • 29 Jun 2021 • Su Wang, Seyyedali Hosseinalipour, Maria Gorlatova, Christopher G. Brinton, Mung Chiang
The presence of time-varying data heterogeneity and computational resource inadequacy among device clusters motivate four key parts of our methodology: (i) stratified UAV swarms of leader, worker, and coordinator UAVs, (ii) hierarchical nested personalized federated learning (HN-PFL), a distributed ML framework for personalized model training across the worker-leader-core network hierarchy, (iii) cooperative UAV resource pooling to address computational inadequacy of devices by conducting model training among the UAV swarms, and (iv) model/concept drift to model time-varying data distributions.
no code implementations • 8 Apr 2021 • Rajeev Sahay, Christopher G. Brinton, David J. Love
Furthermore, adversarial interference is transferable in black box environments, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification model.
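For readers unfamiliar with black-box transfer, the following generic FGSM sketch (a standard attack, not necessarily the paper's exact crafting procedure) perturbs an input against a surrogate classifier and feeds the same perturbed input to a separate victim model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two independent classifiers over the same input space (randomly initialized here).
surrogate = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
victim    = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))

x = torch.randn(1, 64, requires_grad=True)   # e.g., features of a received RF signal
y = torch.tensor([2])                        # its true class

# FGSM: a single gradient-sign step on the surrogate's loss.
loss = nn.functional.cross_entropy(surrogate(x), y)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# The same perturbation is applied, unchanged, to the victim model (black-box transfer).
print("surrogate prediction:", surrogate(x_adv).argmax(dim=1).item())
print("victim prediction   :", victim(x_adv).argmax(dim=1).item())
```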
1 code implementation • 18 Mar 2021 • Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolo Michelusi
Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge.
no code implementations • 25 Jan 2021 • Myeung Suk Oh, Seyyedali Hosseinalipour, Taejoon Kim, Christopher G. Brinton, David J. Love
Our methodology includes a new successive channel denoising process based on channel curvature computation, for which we obtain a channel curvature magnitude threshold to identify unreliable channel estimates.
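A rough sketch of the curvature-thresholding idea appears below; the second-difference curvature proxy and the fixed threshold are assumptions made for illustration, whereas the paper derives its threshold from channel statistics.

```python
import numpy as np

def flag_unreliable(channel_est, curvature_threshold):
    """Flag channel estimates whose local curvature (second difference across
    adjacent estimates) is abnormally large, suggesting noise rather than a
    smoothly varying channel."""
    curvature = np.abs(channel_est[:-2] - 2 * channel_est[1:-1] + channel_est[2:])
    unreliable = np.zeros(len(channel_est), dtype=bool)
    unreliable[1:-1] = curvature > curvature_threshold
    return unreliable

h = np.exp(1j * 0.1 * np.arange(32))   # smooth toy channel across 32 subcarriers
h[10] += 1.5                            # one badly corrupted estimate
# The corrupted estimate and its immediate neighbors are flagged.
print(np.where(flag_unreliable(h, curvature_threshold=0.5))[0])
```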
no code implementations • 4 Jan 2021 • Su Wang, Mengyuan Lee, Seyyedali Hosseinalipour, Roberto Morabito, Mung Chiang, Christopher G. Brinton
The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.
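The conventional FedL loop referenced here can be summarized with a generic FedAvg-style sketch on toy linear-regression clients (illustrative only, not the augmented architecture proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each worker device holds its own local dataset.
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                            # periodic aggregation rounds
    local_models = []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                     # local gradient descent steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)   # server-side aggregation

print(w_global)   # approaches [2, -1]
```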
1 code implementation • 2 Dec 2020 • Adam Hare, Yu Chen, Yinan Liu, Zhenming Liu, Christopher G. Brinton
Despite the recent successes of deep learning in natural language processing (NLP), there remains widespread usage of and demand for techniques that do not rely on machine learning.
no code implementations • 2 Nov 2020 • Rajeev Sahay, Christopher G. Brinton, David J. Love
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals.
no code implementations • 2 Nov 2020 • JungHoon Kim, Seyyedali Hosseinalipour, Taejoon Kim, David J. Love, Christopher G. Brinton
Applications of intelligent reflecting surfaces (IRSs) in wireless networks have attracted significant attention recently.
no code implementations • 29 Sep 2020 • Mengyuan Lee, Seyyedali Hosseinalipour, Christopher G. Brinton, Guanding Yu, Huaiyu Dai
However, the problem of allocating items among the bidders to maximize the auctioneer's revenue, i.e., the winner determination problem (WDP), is NP-complete to solve and inapproximable.
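For context, the WDP asks for a conflict-free subset of bids that maximizes revenue; the brute-force sketch below (with made-up bids) makes the exponential search space, and hence the appeal of a learning-based approach, explicit.

```python
from itertools import combinations

# Each bid: (set of items requested, offered price) -- illustrative values only.
bids = [({"A", "B"}, 5), ({"B", "C"}, 4), ({"C"}, 2), ({"A"}, 3)]

best_revenue, best_allocation = 0, ()
for r in range(1, len(bids) + 1):
    for subset in combinations(range(len(bids)), r):
        items = [bids[i][0] for i in subset]
        # Accept the subset only if no item is sold twice.
        if sum(len(s) for s in items) == len(set().union(*items)):
            revenue = sum(bids[i][1] for i in subset)
            if revenue > best_revenue:
                best_revenue, best_allocation = revenue, subset

print(best_revenue, best_allocation)   # 7, accepting bids 0 and 2: {A,B} + {C}
```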
no code implementations • 21 Aug 2020 • Frank Po-Chen Lin, Christopher G. Brinton, Nicolò Michelusi
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.
no code implementations • 5 Aug 2020 • Qiong Wu, Adam Hare, Sirui Wang, Yuwei Tu, Zhenming Liu, Christopher G. Brinton, Yanhua Li
In this work, we reexamine the inter-related problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest.
no code implementations • 26 Jul 2020 • Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, Mung Chiang, H. Vincent Poor
In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed.
no code implementations • 25 Jul 2020 • JungHoon Kim, Taejoon Kim, Morteza Hashemi, Christopher G. Brinton, David J. Love
Device-to-device (D2D) communications is expected to be a critical enabler of distributed computing in edge networks at scale.
1 code implementation • 18 Jul 2020 • Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolo Michelusi, Vaneet Aggarwal, David J. Love, Huaiyu Dai
We derive the upper bound of convergence for MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and the learning algorithm (e.g., the number of D2D rounds in different clusters).
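Among those topology parameters, the spectral radius is easy to compute for a given cluster's mixing matrix; the following toy example uses an assumed 4-device ring topology rather than a matrix from the paper.

```python
import numpy as np

# Doubly stochastic mixing matrix for a 4-device D2D cluster (ring topology).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Consensus speed is governed by the spectral radius of W minus the averaging matrix.
n = W.shape[0]
deviation = W - np.ones((n, n)) / n
spectral_radius = max(abs(np.linalg.eigvals(deviation)))
print(spectral_radius)   # < 1, so local D2D rounds contract toward the cluster average
```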
no code implementations • 7 Jun 2020 • Seyyedali Hosseinalipour, Christopher G. Brinton, Vaneet Aggarwal, Huaiyu Dai, Mung Chiang
There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exist across devices.
no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong
Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off the costs of processing, offloading, and discarding data points at the devices.
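A minimal sketch of such a processing/offloading/discarding trade-off, posed as a small convex program, is given below; the cost coefficients, capacities, and constraint set are illustrative assumptions, not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

n_devices = 3
data = np.array([100.0, 40.0, 10.0])       # data points collected at each device
capacity = np.array([30.0, 80.0, 60.0])    # points each device can process locally

process = cp.Variable(n_devices, nonneg=True)                 # processed locally
offload = cp.Variable((n_devices, n_devices), nonneg=True)    # entry (i, j): sent from i to j
discard = cp.Variable(n_devices, nonneg=True)                 # dropped points

net_received = cp.sum(offload, axis=0) - cp.sum(offload, axis=1)
constraints = [
    process + discard == data + net_received,   # conservation of data at each device
    process <= capacity,
    cp.diag(offload) == 0,
]
# Assumed unit costs: processing is cheap, offloading moderate, discarding expensive.
cost = 1.0 * cp.sum(process) + 2.0 * cp.sum(offload) + 10.0 * cp.sum(discard)
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print(process.value.round(1), discard.value.round(1))   # overloaded device offloads, nothing is discarded
```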
no code implementations • 27 Feb 2020 • JungHoon Kim, Taejoon Kim, Morteza Hashemi, Christopher G. Brinton, David J. Love
In this paper, unlike previous mobile edge computing (MEC) approaches, we propose a joint optimization of wireless MIMO signal design and network resource allocation to maximize energy efficiency.
no code implementations • 23 Jan 2020 • Yuwei Tu, WeiYu Chen, Christopher G. Brinton
The increasing popularity of e-learning has created demand for improving online education through techniques such as predictive analytics and content recommendations.
no code implementations • 7 Sep 2019 • Qiong Wu, Christopher G. Brinton, Zheng Zhang, Andrea Pizzoferrato, Zhenming Liu, Mihai Cucuringu
Pricing assets has attracted significant attention from the financial technology community.