1 code implementation • 5 Dec 2023 • Sahil Tyagi, Martin Swany
Gradient compression alleviates expensive communication in distributed deep learning by sending fewer values and their corresponding indices, typically via Allgather (AG).
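As a rough illustration of the "fewer values and their indices" idea, here is a minimal top-k sparsification sketch in NumPy. This is an assumption-laden toy, not the paper's method: each worker would keep only the k largest-magnitude gradient entries and exchange the (values, indices) pairs, e.g. via an Allgather collective.

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int):
    """Return the k largest-magnitude entries and their flat indices."""
    flat = grad.ravel()
    # argpartition finds the indices of the k largest |values| in O(n)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return flat[idx], idx

def topk_decompress(values, idx, shape):
    """Scatter the sparse (values, indices) pairs back into a dense gradient."""
    dense = np.zeros(int(np.prod(shape)))
    dense[idx] = values
    return dense.reshape(shape)

# Toy gradient: only the two largest-magnitude entries survive compression.
g = np.array([[0.1, -2.0], [0.05, 3.0]])
vals, idx = topk_compress(g, k=2)
restored = topk_decompress(vals, idx, g.shape)
```

Sending only `vals` and `idx` (2k numbers instead of the full tensor) is what makes the collective cheaper; the small entries are simply dropped (or, in practice, accumulated locally as error feedback).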
1 code implementation • 16 Jul 2023 • Sahil Tyagi, Martin Swany
In distributed training, deep neural networks (DNNs) are trained over multiple workers concurrently, and the workers aggregate their local updates at each step in bulk-synchronous parallel (BSP) training.
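The BSP step described above can be sketched as a simple averaging barrier: every worker finishes its local update, then all updates are combined before anyone proceeds. This is a minimal single-process sketch (the function name `bsp_aggregate` is hypothetical), standing in for what an allreduce-mean collective would do across real workers.

```python
import numpy as np

def bsp_aggregate(local_updates):
    """Average per-worker updates, as an allreduce-mean would in BSP training."""
    return np.mean(local_updates, axis=0)

# Two workers produce local gradient updates on their data shards;
# the synchronized global update is their element-wise mean.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
avg = bsp_aggregate(updates)
```

Because every worker must wait for the slowest one at this barrier, BSP throughput is bounded by stragglers, which is one motivation for the communication-efficiency work in these papers.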
1 code implementation • 20 May 2023 • Sahil Tyagi, Martin Swany
Distributed data-parallel (DDP) training improves overall application throughput as multiple devices train on a subset of data and aggregate updates to produce a globally shared model.
no code implementations • 16 Feb 2023 • Cheng Chu, Lei Jiang, Martin Swany, Fan Chen
In this paper, we propose QTrojan, a circuit-level backdoor attack against Quantum Neural Networks (QNNs).
no code implementations • 14 Feb 2023 • Malintha Fernando, Ransalu Senanayake, Heeyoul Choi, Martin Swany
Autonomous mobility is emerging as a new disruptive mode of urban transportation for moving cargo and passengers.
1 code implementation • 21 Jan 2023 • Sahil Tyagi, Martin Swany
In this paper, we introduce ScaDLES to efficiently train on streaming data at the edge in an online fashion, while also addressing the challenges of limited bandwidth and training with non-IID data.
no code implementations • 8 Nov 2021 • Malintha Fernando, Ransalu Senanayake, Martin Swany
We propose a novel framework for real-time communication-aware coverage control in networked robot swarms.