Search Results for author: Holger Karl

Found 13 papers, 6 papers with code

Stability and Convergence of Distributed Stochastic Approximations with Large Unbounded Stochastic Information Delays

no code implementations • 11 May 2023 • Adrian Redder, Arunselvan Ramaswamy, Holger Karl

We generalize the Borkar-Meyn stability Theorem (BMT) to distributed stochastic approximations (SAs) with information delays that possess an arbitrary moment bound.
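
For intuition, the kind of recursion such a result covers is a stochastic approximation whose drift is evaluated at stale iterates. The display below is an illustrative sketch of that setting (the notation is ours, not the paper's): agent $i$ updates its own component using information about the other components that is $\tau_{ij}(n)$ steps old,

$$x^i_{n+1} = x^i_n + a_n \left[ h_i\!\left(x^1_{n-\tau_{i1}(n)}, \dots, x^m_{n-\tau_{im}(n)}\right) + M^i_{n+1} \right],$$

where $a_n$ are step sizes, $M^i_{n+1}$ is martingale noise, and the random delays $\tau_{ij}(n)$ may be unbounded but satisfy a moment bound.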

Multi-Agent Reinforcement Learning for Long-Term Network Resource Allocation through Auction: a V2X Application

no code implementations • 29 Jul 2022 • Jing Tan, Ramin Khalili, Holger Karl, Artur Hecker

We formulate offloading of computational tasks from a dynamic group of mobile agents (e.g., cars) as decentralized decision making among autonomous agents.

Decision Making • Fairness +1
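
As a rough illustration of auction-based, decentralized offloading in the spirit of the paper above (the second-price rule and all names here are assumptions for the sketch, not the paper's mechanism), each mobile agent bids for a limited resource slot and the winner pays the second-highest bid:

```python
import random

def second_price_auction(bids):
    """Allocate one resource slot to the highest bidder; the winner pays the second-highest bid."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Toy example: mobile agents (e.g., cars) bid according to the value of offloading a task.
bids = {f"car_{i}": round(random.uniform(0.1, 1.0), 2) for i in range(5)}
winner, price = second_price_auction(bids)
print(f"bids={bids} -> winner={winner}, pays={price}")
```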

Learning to Bid Long-Term: Multi-Agent Reinforcement Learning with Long-Term and Sparse Reward in Repeated Auction Games

1 code implementation • 5 Apr 2022 • Jing Tan, Ramin Khalili, Holger Karl

We propose a multi-agent distributed reinforcement learning algorithm that balances between potentially conflicting short-term reward and sparse, delayed long-term reward, and learns with partial information in a dynamic environment.

Multi-agent Reinforcement Learning • reinforcement-learning +1
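
A minimal sketch of the reward structure such an agent faces (the weighting and episode layout are assumptions for illustration, not the paper's algorithm): dense but small per-step rewards plus a sparse long-term payoff that only arrives at the end of the episode.

```python
def episode_return(step_rewards, final_bonus, gamma=0.99):
    """Discounted return combining dense short-term rewards with a sparse,
    delayed long-term reward that is granted only after the final step."""
    ret = sum((gamma ** t) * r for t, r in enumerate(step_rewards))
    ret += (gamma ** len(step_rewards)) * final_bonus  # sparse long-term component
    return ret

# Small immediate rewards, large payoff only if the long-term goal is eventually met.
print(episode_return(step_rewards=[0.1, 0.0, 0.2, 0.0], final_bonus=5.0))
```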

Distributed gradient-based optimization in the presence of dependent aperiodic communication

no code implementations • 27 Jan 2022 • Adrian Redder, Arunselvan Ramaswamy, Holger Karl

We show: if for any $p \ge 0$ the processes that describe the success of communication between agents in an SSC network are $\alpha$-mixing with $n^{p-1}\alpha(n)$ summable, then the associated AoI processes are stochastically dominated by a random variable with finite $p$-th moment.

Distributed Optimization
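
The statement ties the mixing rate of the communication process to moments of the Age of Information (AoI), i.e., how stale each agent's view of the others is. The toy simulation below is an illustration only: it uses i.i.d. Bernoulli link successes (a trivial special case of an $\alpha$-mixing process) and tracks the AoI of a single link.

```python
import random

def simulate_aoi(success_prob=0.3, horizon=20, seed=0):
    """Track the Age of Information (steps since the last successful
    transmission) for a single link with i.i.d. Bernoulli successes."""
    rng = random.Random(seed)
    aoi, trace = 0, []
    for _ in range(horizon):
        if rng.random() < success_prob:
            aoi = 0       # fresh information received
        else:
            aoi += 1      # information keeps aging
        trace.append(aoi)
    return trace

print(simulate_aoi())
```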

3DPG: Distributed Deep Deterministic Policy Gradient Algorithms for Networked Multi-Agent Systems

no code implementations • 3 Jan 2022 • Adrian Redder, Arunselvan Ramaswamy, Holger Karl

We prove the asymptotic convergence of 3DPG even in the presence of potentially unbounded Age of Information (AoI).
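
3DPG builds on deterministic policy gradients. The snippet below is only a compact, single-agent DDPG-style update on synthetic data (network sizes, dimensions, and the replay batch are made up); it omits target networks as well as the distributed, AoI-aware machinery that the paper actually analyzes.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, batch, gamma = 8, 2, 32, 0.99  # illustrative dimensions only

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Synthetic replay batch standing in for real transitions.
obs, next_obs = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
act, rew = torch.randn(batch, act_dim).clamp(-1, 1), torch.randn(batch, 1)

# Critic update: regress Q(s, a) towards a bootstrapped target.
with torch.no_grad():
    target_q = rew + gamma * critic(torch.cat([next_obs, actor(next_obs)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([obs, act], dim=1)), target_q)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: ascend the critic's estimate of Q(s, actor(s)).
actor_loss = -critic(torch.cat([obs, actor(obs)], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```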

Reinforcement Learning for Admission Control in Wireless Virtual Network Embedding

no code implementations • 4 Oct 2021 • Haitham Afifi, Fabian Sauer, Holger Karl

Service Function Chaining (SFC) in wireless networks has become popular in many domains, such as networking and multimedia.

Network Embedding • reinforcement-learning +1
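
A toy accept/reject environment in the same spirit (the state, capacity model, and reward are assumptions for illustration and are not taken from the paper):

```python
import random

class AdmissionControlEnv:
    """Toy admission control: each step a request with a resource demand arrives
    and the agent accepts (1) or rejects (0) it under a fixed capacity budget."""

    def __init__(self, capacity=10.0, seed=0):
        self.capacity, self.used = capacity, 0.0
        self.rng = random.Random(seed)

    def reset(self):
        self.used = 0.0
        return self._observe()

    def _observe(self):
        self.demand = self.rng.uniform(0.5, 3.0)
        return (self.capacity - self.used, self.demand)

    def step(self, action):
        reward = 0.0
        if action == 1:
            if self.used + self.demand <= self.capacity:
                self.used += self.demand
                reward = self.demand      # revenue for serving the request
            else:
                reward = -1.0             # penalty for over-admission
        done = self.capacity - self.used < 0.5
        return self._observe(), reward, done

env = AdmissionControlEnv()
print(env.reset(), env.step(1))
```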

Self-Driving Network and Service Coordination Using Deep Reinforcement Learning

1 code implementation • 2 Nov 2020 • Stefan Schneider, Adnan Manzoor, Haydar Qarawlus, Rafael Schellenberg, Holger Karl, Ramin Khalili, Artur Hecker

While this typically works well for the considered scenario, the models often rely on unrealistic assumptions or on knowledge that is not available in practice (e.g., a priori knowledge).

reinforcement-learning • Reinforcement Learning (RL)

Machine Learning for Dynamic Resource Allocation in Network Function Virtualization

1 code implementation • 12 Aug 2020 • Stefan Schneider, Narayanan Puthenpurayil Satheeschandran, Manuel Peuster, Holger Karl

To solve this problem, we train machine learning models on real VNF data, containing measurements of performance and resource requirements.

BIG-bench Machine Learning • Model Selection
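
In the same spirit as training models on VNF measurements, the sketch below fits a simple regressor that predicts a resource requirement (e.g., CPU share) from load features; the data are synthetic placeholders, not the paper's VNF measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for measurements: [requests/s, packet size in bytes] -> CPU share.
X = rng.uniform([100, 64], [10_000, 1500], size=(500, 2))
y = 0.00005 * X[:, 0] + 0.0002 * X[:, 1] + rng.normal(0, 0.02, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```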

The Softwarised Network Data Zoo

1 code implementation • 21 Oct 2019 • Manuel Peuster, Stefan Schneider, Holger Karl

To this end, we introduce the "softwarised network data zoo" (SNDZoo), an open collection of software networking data sets aiming to streamline and ease machine learning research in the software networking domain.

BIG-bench Machine Learning • Management

DeepCAS: A Deep Reinforcement Learning Algorithm for Control-Aware Scheduling

no code implementations • 8 Mar 2018 • Burak Demirel, Arunselvan Ramaswamy, Daniel E. Quevedo, Holger Karl

The main contribution of this paper is to develop a deep reinforcement learning-based control-aware scheduling (DeepCAS) algorithm to tackle these issues.

reinforcement-learning • Reinforcement Learning (RL) +1
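
DeepCAS itself uses deep RL; the toy below plainly swaps in tabular Q-learning just to show the control-aware scheduling decision of picking which of several control loops gets the shared channel each step. The state encoding and cost model are invented for illustration.

```python
import random

N_LOOPS, STEPS, EPS, ALPHA, GAMMA = 3, 2000, 0.1, 0.1, 0.95
rng = random.Random(0)

# State: index of the loop that has waited longest (a crude proxy); action: loop to schedule.
Q = [[0.0] * N_LOOPS for _ in range(N_LOOPS)]
ages = [0] * N_LOOPS

def step(action):
    """Schedule one loop; the others age, and aging loops incur control cost."""
    global ages
    cost = sum(a for i, a in enumerate(ages) if i != action)
    ages = [0 if i == action else ages[i] + 1 for i in range(N_LOOPS)]
    return -cost, ages.index(max(ages))

state = 0
for _ in range(STEPS):
    action = rng.randrange(N_LOOPS) if rng.random() < EPS else Q[state].index(max(Q[state]))
    reward, next_state = step(action)
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print("Greedy action per state:", [row.index(max(row)) for row in Q])
```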
