Search Results for author: Maxime Bouton

Found 13 papers, 5 papers with code

Towards Addressing Training Data Scarcity Challenge in Emerging Radio Access Networks: A Survey and Framework

no code implementations • 24 Apr 2023 • Haneya Naeem Qureshi, Usama Masood, Marvin Manalastas, Syed Muhammad Asad Zaidi, Hasan Farooq, Julien Forgeat, Maxime Bouton, Shruti Bothe, Per Karlsson, Ali Rizwan, Ali Imran

The extensive survey of techniques for addressing training data scarcity, combined with the proposed framework for selecting a suitable technique for a given type of data, can assist researchers and network operators in choosing appropriate methods to overcome the data scarcity challenge when applying AI to radio access network automation.

Few-Shot Learning • Matrix Completion • +1

Model Based Residual Policy Learning with Applications to Antenna Control

no code implementations • 16 Nov 2022 • Viktor Eriksson Möllerstedt, Alessio Russo, Maxime Bouton

Non-differentiable controllers and rule-based policies are widely used for controlling real systems such as telecommunication networks and robots.

Reinforcement Learning (RL)
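As a rough illustration of the residual policy learning idea in the entry above (a sketch, not code from the paper), the executed action can be the sum of a fixed rule-based controller's output and a learned correction. The tilt setpoint, state layout, and function names below are invented for the example:

    import numpy as np

    def base_controller(state):
        # Hypothetical rule-based controller: steer the antenna tilt toward a
        # fixed setpoint of 8 degrees, with a bounded adjustment per step.
        return float(np.clip(8.0 - state["tilt"], -1.0, 1.0))

    def residual_policy(state):
        # Placeholder for the learned residual (e.g. a small neural network);
        # before training it outputs zero, so the base controller acts alone.
        return 0.0

    def act(state):
        # Residual policy learning: the executed action is the base
        # controller's output plus the learned correction.
        return base_controller(state) + residual_policy(state)

    print(act({"tilt": 6.0}))  # -> 1.0 (base action only, residual untrained)

Keeping the rule-based controller in the loop means the system behaves sensibly even before the residual has learned anything useful.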

A Graph Attention Learning Approach to Antenna Tilt Optimization

no code implementations • 27 Dec 2021 • Yifei Jin, Filippo Vannella, Maxime Bouton, Jaeseong Jeong, Ezeddin Al Hakim

GAQ relies on a graph attention mechanism to select relevant neighbor information, improve the agent's state representation, and update the tilt control policy based on a history of observations using a Deep Q-Network (DQN).

Graph Attention • Q-Learning • +1
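A minimal PyTorch sketch of the kind of architecture the GAQ abstract describes: an attention layer aggregates neighbor-cell features into the agent's state representation, and a linear head maps it to Q-values over tilt actions. The layer sizes and tensor shapes are illustrative assumptions, not taken from the paper:

    import torch
    import torch.nn as nn

    class GraphAttentionQ(nn.Module):
        # Attend over neighbor-cell features to build the agent's state
        # representation, then score each discrete tilt action.
        def __init__(self, feat_dim, n_actions):
            super().__init__()
            self.attn = nn.MultiheadAttention(feat_dim, num_heads=1, batch_first=True)
            self.q_head = nn.Linear(feat_dim, n_actions)

        def forward(self, self_feat, neighbor_feats):
            # self_feat: (batch, 1, feat_dim); neighbor_feats: (batch, k, feat_dim)
            context, _ = self.attn(self_feat, neighbor_feats, neighbor_feats)
            return self.q_head(context.squeeze(1))  # Q-value per tilt action

    q_net = GraphAttentionQ(feat_dim=16, n_actions=3)  # dimensions are illustrative
    q_values = q_net(torch.randn(1, 1, 16), torch.randn(1, 5, 16))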

Coordinated Reinforcement Learning for Optimizing Mobile Networks

no code implementations • 30 Sep 2021 • Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson

In this work, we demonstrate how to use coordination graphs and reinforcement learning in a complex application involving hundreds of cooperating agents.

Multi-agent Reinforcement Learning • reinforcement-learning • +1
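As a toy illustration of the coordination-graph idea above (not the paper's implementation), the global value of a joint action can be written as a sum of pairwise payoffs on the edges of a graph between agents, which is what makes coordinated action selection tractable:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy coordination graph: 3 agents with 2 actions each, and a pairwise
    # payoff table attached to each edge of the graph.
    edges = {(0, 1): rng.random((2, 2)), (1, 2): rng.random((2, 2))}

    def joint_value(actions):
        # The global value decomposes into a sum of per-edge payoffs.
        return sum(q[actions[i], actions[j]] for (i, j), q in edges.items())

    # Brute force works at toy scale; with the hundreds of cooperating agents
    # in the paper's setting one would instead run message passing (e.g.
    # max-plus) or variable elimination on the coordination graph.
    best = max(itertools.product(range(2), repeat=3), key=joint_value)
    print(best, joint_value(best))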

Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic

no code implementations • 25 May 2020 • Maxime Bouton, Alireza Nakhaei, David Isele, Kikuo Fujimura, Mykel J. Kochenderfer

This approach exposes the agent to a broad variety of behaviors during training, which promotes learning policies that are robust to model discrepancies.

Autonomous Vehicles • reinforcement-learning • +1
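A schematic sketch of the iterative (level-k) reasoning training loop the abstract hints at: each level-k merging policy is trained in traffic populated by frozen level-(k-1) drivers. The names make_env, train_rl, and the level-0 rule are stand-ins, assumed here rather than taken from the paper:

    def rule_based_policy(observation):
        # Level-0 driver model: a simple hand-coded behavior (placeholder).
        return "keep_lane"

    def train_level_k_policies(make_env, train_rl, k_max):
        # Each level-k policy trains against frozen level-(k-1) agents, so the
        # learner is exposed to a progressively broader range of behaviors.
        policies = [rule_based_policy]
        for k in range(1, k_max + 1):
            env = make_env(other_drivers=policies[k - 1])
            policies.append(train_rl(env))
        return policies

    # Example wiring with trivial stubs in place of a real simulator/trainer:
    policies = train_level_k_policies(
        make_env=lambda other_drivers: None,
        train_rl=lambda env: rule_based_policy,
        k_max=2,
    )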

Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes

1 code implementation • 11 Jan 2020 • Maxime Bouton, Jana Tumova, Mykel J. Kochenderfer

Autonomous systems are often required to operate in partially observable environments.

Cooperation-Aware Reinforcement Learning for Merging in Dense Traffic

1 code implementation • 26 Jun 2019 • Maxime Bouton, Alireza Nakhaei, Kikuo Fujimura, Mykel J. Kochenderfer

In this work, we present a reinforcement learning approach that learns how to interact with drivers exhibiting different levels of cooperation.

Autonomous Vehicles • Decision Making • +3

Pedestrian Collision Avoidance System for Scenarios with Occlusions

1 code implementation • 25 Apr 2019 • Markus Schratter, Maxime Bouton, Mykel J. Kochenderfer, Daniel Watzenig

We show that combining the two approaches yields a robust autonomous braking system that reduces the unnecessary braking that occurs when the AEB system is used on its own.

Autonomous Driving • Collision Avoidance

Decomposition Methods with Deep Corrections for Reinforcement Learning

1 code implementation • 6 Feb 2018 • Maxime Bouton, Kyle Julian, Alireza Nakhaei, Kikuo Fujimura, Mykel J. Kochenderfer

In contexts where an agent interacts with multiple entities, utility decomposition can be used to separate the global objective into local tasks, each considering an individual entity independently.

Autonomous Driving • Decision Making • +5
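A toy sketch of the decomposition-with-correction idea described above: the global action value is approximated by a sum of per-entity local utilities plus a learned correction term. The quadratic local utility and the example states are invented for illustration:

    def local_q(entity_state, action):
        # Utility of an action with respect to a single entity in isolation,
        # e.g. a pretrained pairwise value function (toy quadratic here).
        return -(entity_state - action) ** 2

    def correction(entity_states, action):
        # Placeholder for the learned (deep) correction that captures
        # interactions between entities; it outputs zero before training.
        return 0.0

    def global_q(entity_states, action):
        # Utility decomposition: sum the local, per-entity utilities, then
        # add the learned correction term on top.
        return sum(local_q(s, action) for s in entity_states) + correction(entity_states, action)

    best_action = max(range(3), key=lambda a: global_q([0.0, 2.0], a))
    print(best_action)  # -> 1, the action that best trades off both entities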

Belief State Planning for Autonomously Navigating Urban Intersections

no code implementations • 14 Apr 2017 • Maxime Bouton, Akansel Cosgun, Mykel J. Kochenderfer

Urban intersections represent a complex environment for autonomous vehicles with many sources of uncertainty.

Robotics
