no code implementations • 13 Mar 2024 • Jing Tan, Ramin Khalili, Holger Karl
We test our algorithm in an ITS environment with edge cloud computing.
no code implementations • 15 Jan 2024 • Nikolaos Koursioumpas, Lina Magoula, Ioannis Stavrakakis, Nancy Alonistioti, M. A. Gutierrez-Estevez, Ramin Khalili
Alternative solutions have emerged (e.g., Split Learning, Federated Learning), distributing AI tasks of reduced complexity across nodes while preserving the privacy of the data.
no code implementations • 21 Aug 2023 • Nikolaos Koursioumpas, Lina Magoula, Nikolaos Petropouleas, Alexandros-Ioannis Thanopoulos, Theodora Panagea, Nancy Alonistioti, M. A. Gutierrez-Estevez, Ramin Khalili
Progressing towards a new era of Artificial Intelligence (AI)-enabled wireless networks, concerns regarding the environmental impact of AI have been raised both in industry and academia.
no code implementations • 18 Jul 2023 • Kilian Pfeiffer, Martin Rapp, Ramin Khalili, Jörg Henkel
With an increasing number of smart devices, such as Internet of Things (IoT) devices, deployed in the field, offloading the training of neural networks (NNs) to a central server becomes more and more infeasible.
no code implementations • 25 Jun 2023 • Lina Magoula, Nikolaos Koursioumpas, Alexandros-Ioannis Thanopoulos, Theodora Panagea, Nikolaos Petropouleas, M. A. Gutierrez-Estevez, Ramin Khalili
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner while preserving data privacy.
1 code implementation • NeurIPS 2023 • Kilian Pfeiffer, Ramin Khalili, Jörg Henkel
If the required memory to train a model exceeds this limit, the device will be excluded from the training.
1 code implementation • 24 May 2023 • Philipp Wiesner, Ramin Khalili, Dennis Grinwald, Pratik Agrawal, Lauritz Thamsen, Odej Kao
Federated Learning (FL) is an emerging machine learning technique that enables distributed model training across data silos or edge devices without data sharing.
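To make the FL setting described in these entries concrete, here is a minimal sketch of one FedAvg-style training round on a toy linear model. The function names (`local_step`, `fedavg_round`) and the synthetic data are illustrative assumptions, not code from any of the papers listed here.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=5):
    """Client-side training: plain gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """Server side: aggregate client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_step(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three data silos; raw data never leaves a client, only model weights do.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

The key property, matching the description above, is that only model parameters cross silo boundaries; the training data stays local.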
no code implementations • 29 Jul 2022 • Jing Tan, Ramin Khalili, Holger Karl, Artur Hecker
We formulate offloading of computational tasks from a dynamic group of mobile agents (e.g., cars) as decentralized decision making among autonomous agents.
no code implementations • 13 Jul 2022 • Taylan Şahin, Ramin Khalili, Mate Boban, Adam Wolisz
To exploit the benefits of the centralized approach for enhancing the reliability of V2V communications on roads lacking cellular coverage, we propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications before vehicles leave the cellular network coverage.
1 code implementation • 5 Apr 2022 • Jing Tan, Ramin Khalili, Holger Karl, Artur Hecker
We formulate computation offloading as a decentralized decision-making problem with autonomous agents.
1 code implementation • 5 Apr 2022 • Jing Tan, Ramin Khalili, Holger Karl
We propose a multi-agent distributed reinforcement learning algorithm that balances between potentially conflicting short-term reward and sparse, delayed long-term reward, and learns with partial information in a dynamic environment.
1 code implementation • 10 Mar 2022 • Kilian Pfeiffer, Martin Rapp, Ramin Khalili, Jörg Henkel
To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, whereas other layers are still trained in full precision, enabling high accuracy to be reached.
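The adaptation idea described above can be sketched as follows: given a per-device memory budget, keep training some layers in full precision and freeze-and-quantize the rest. The greedy last-layers-first policy, the 4-bytes-per-parameter cost model, and all names below are illustrative assumptions, not the actual CoCoFL implementation.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantization for a frozen layer's weights (~4x smaller)."""
    scale = float(np.abs(w).max()) / 127 or 1.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def configure_layers(layer_sizes, memory_budget):
    """Greedily keep the last layers (closest to the loss) trainable in full
    precision until the budget is exhausted; the rest are frozen+quantized."""
    trainable, used = set(), 0
    for i in reversed(range(len(layer_sizes))):
        cost = 4 * layer_sizes[i]  # rough float32 cost per trainable parameter
        if used + cost <= memory_budget:
            trainable.add(i)
            used += cost
    return trainable

# A toy 3-layer model on a device that fits only ~6 KB of trainable state.
layers = [np.random.randn(n) for n in (1000, 1000, 500)]
trainable = configure_layers([w.size for w in layers], memory_budget=6000)
for i, w in enumerate(layers):
    if i not in trainable:
        q, s = quantize_int8(w)  # frozen layers stored/communicated as int8
print(sorted(trainable))  # → [1, 2]: the first layer is frozen and quantized
```

Training only a suffix of the network is one plausible heuristic; the point is that the trainable set is chosen per device, so resource-constrained devices can still contribute full-precision updates for the layers they do train.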
no code implementations • 16 Dec 2021 • Martin Rapp, Ramin Khalili, Kilian Pfeiffer, Jörg Henkel
We study the problem of distributed training of neural networks (NNs) on devices with heterogeneous, limited, and time-varying availability of computational resources.
1 code implementation • 2 Nov 2020 • Stefan Schneider, Adnan Manzoor, Haydar Qarawlus, Rafael Schellenberg, Holger Karl, Ramin Khalili, Artur Hecker
While this typically works well for the considered scenario, the models often rely on unrealistic assumptions or on knowledge that is not available in practice (e.g., a priori knowledge).
no code implementations • 9 Jun 2020 • Martin Rapp, Ramin Khalili, Jörg Henkel
We consider a distributed system, consisting of a heterogeneous set of devices, ranging from low-end to high-end.
no code implementations • 22 Jul 2019 • Taylan Şahin, Ramin Khalili, Mate Boban, Adam Wolisz
VRLS is a unified reinforcement learning (RL) solution, wherein the learning agent, the state representation, and the reward provided to the agent are applicable to different vehicular environments of interest (in terms of vehicular density, resource configuration, and wireless channel conditions).
no code implementations • 29 Apr 2019 • Taylan Şahin, Ramin Khalili, Mate Boban, Adam Wolisz
Radio resources in vehicle-to-vehicle (V2V) communication can be scheduled either by a centralized scheduler residing in the network (e.g., a base station in the case of cellular systems) or by a distributed scheduler, where the resources are autonomously selected by the vehicles.