Search Results for author: Woojun Kim

Found 12 papers, 2 papers with code

Value-Aided Conditional Supervised Learning for Offline RL

no code implementations • 3 Feb 2024 • Jeonghye Kim, Suyoung Lee, Woojun Kim, Youngchul Sung

Offline reinforcement learning (RL) has seen notable advancements through return-conditioned supervised learning (RCSL) and value-based methods, yet each approach comes with its own set of practical challenges.

Offline RL • Reinforcement Learning (RL)
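
For context, return-conditioned supervised learning (RCSL) trains a policy by supervised action prediction conditioned on a target return-to-go. The minimal PyTorch-style sketch below shows plain RCSL only; it is not this paper's value-aided algorithm, and every name in it (`RCSLPolicy`, `rcsl_loss`) is hypothetical.

```python
import torch
import torch.nn as nn

class RCSLPolicy(nn.Module):
    """Return-conditioned supervised learning: predict the action
    from the state and the desired return-to-go (a scalar per sample)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, return_to_go):
        # state: (batch, state_dim), return_to_go: (batch, 1)
        return self.net(torch.cat([state, return_to_go], dim=-1))

def rcsl_loss(policy, state, action, return_to_go):
    # Behavior cloning conditioned on the trajectory's return; a value-based
    # method would instead learn Q(s, a) and act greedily with respect to it.
    pred = policy(state, return_to_go)
    return ((pred - action) ** 2).mean()
```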

Domain Adaptive Imitation Learning with Visual Observation

no code implementations • NeurIPS 2023 • Sungho Choi, Seungyul Han, Woojun Kim, Jongseong Chae, Whiyoung Jung, Youngchul Sung

In this paper, we consider domain-adaptive imitation learning with visual observation, where an agent in a target domain learns to perform a task by observing expert demonstrations in a source domain.

Image Reconstruction • Imitation Learning
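
The Image Reconstruction tag suggests the visual observations are encoded into a latent space and reconstructed as part of aligning the two domains. The sketch below illustrates that general idea with a plain autoencoder trained on observations from both domains; the architecture and loss are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class ObsAutoencoder(nn.Module):
    """Encode visual observations to a shared latent and reconstruct them.
    Training one encoder on both domains is a generic way to pull source-
    and target-domain observations into a common feature space.
    Assumes 3x64x64 observations."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, obs):
        z = self.encoder(obs)
        return self.decoder(z), z

def reconstruction_loss(model, source_obs, target_obs):
    # Same reconstruction objective on both domains.
    loss = 0.0
    for obs in (source_obs, target_obs):
        recon, _ = model(obs)
        loss = loss + ((recon - obs) ** 2).mean()
    return loss
```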

Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making

no code implementations • 4 Oct 2023 • Jeonghye Kim, Suyoung Lee, Woojun Kim, Youngchul Sung

However, we discovered that the attention module of the Decision Transformer (DT) is not appropriate for capturing the inherent local dependence pattern in RL trajectories modeled as a Markov decision process.

Decision Making • Reinforcement Learning (RL)
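
The remedy the title points to, replacing attention with local filtering inside a MetaFormer block, can be pictured as a causal depthwise convolution acting as the token mixer. A minimal sketch follows; the window size and module name are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvMixer(nn.Module):
    """Token mixer that replaces self-attention with a causal depthwise
    convolution, so each timestep mixes only a short local window of
    preceding tokens -- local filtering inside a MetaFormer block."""
    def __init__(self, embed_dim, window=6):
        super().__init__()
        self.window = window
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=window,
                              groups=embed_dim)  # depthwise: one filter/channel

    def forward(self, x):              # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)          # -> (batch, embed_dim, seq_len)
        x = F.pad(x, (self.window - 1, 0))  # left-pad only: causal
        x = self.conv(x)
        return x.transpose(1, 2)       # back to (batch, seq_len, embed_dim)
```

Swapping such a mixer in for multi-head attention leaves the rest of the block (normalization and the MLP) unchanged, which is the MetaFormer view the title refers to.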

Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning

no code implementations • 2 Mar 2023 • Woojun Kim, Youngchul Sung

Scalability is one of the essential challenges in applying multi-agent reinforcement learning (MARL) algorithms to real-world problems, which typically involve a massive number of agents.

Multi-agent Reinforcement Learning • Network Pruning • +2
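
One generic way to combine parameter sharing with pruning is to let every agent share a single weight tensor while applying its own binary mask to it, so agents act differently without maintaining separate networks. The sketch below shows only that idea, with a fixed random mask per agent; the paper's actual pruning procedure is not reproduced here.

```python
import torch
import torch.nn as nn

class MaskedSharedPolicy(nn.Module):
    """All agents share one weight matrix; each agent applies its own
    fixed binary mask, so agents can behave differently at a fraction
    of the parameter count of fully separate networks."""
    def __init__(self, n_agents, obs_dim, act_dim, keep_prob=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(act_dim, obs_dim) * 0.01)
        # One random pruning mask per agent (fixed at init for brevity).
        self.register_buffer(
            "masks",
            (torch.rand(n_agents, act_dim, obs_dim) < keep_prob).float(),
        )

    def forward(self, obs, agent_id):  # obs: (batch, obs_dim)
        w = self.weight * self.masks[agent_id]
        return obs @ w.t()             # -> (batch, act_dim)
```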

A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement Learning

no code implementations • 1 Mar 2023 • Woojun Kim, Whiyoung Jung, Myungsik Cho, Youngchul Sung

In this paper, we propose a new mutual information framework for multi-agent reinforcement learning to enable multiple agents to learn coordinated behaviors by regularizing the accumulated return with the simultaneous mutual information between multi-agent actions.

Multi-agent Reinforcement Learning • reinforcement-learning • +1
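
A standard route to a tractable mutual-information regularizer is the Barber-Agakov variational lower bound, I(X; Y) >= H(X) + E[log q(X | Y)], which holds for any variational distribution q. The sketch below estimates such a bound between two agents' continuous actions; it is a generic illustration of the variational idea, not the paper's estimator.

```python
import torch
import torch.nn as nn

class VariationalMIBound(nn.Module):
    """Barber-Agakov bound: I(A1; A2) >= H(A1) + E[log q(A1 | A2)].
    Maximizing E[log q(a1 | a2)] over q tightens the bound, and the same
    term can then regularize agents toward coordinated actions."""
    def __init__(self, act_dim, hidden=128):
        super().__init__()
        self.q = nn.Sequential(                 # q(a1 | a2): a Gaussian mean
            nn.Linear(act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, a1, a2):  # a1, a2: (batch, act_dim)
        mean = self.q(a2)
        # Log-density of a1 under a unit-variance Gaussian centered at mean
        # (constants dropped); the entropy term H(A1) does not depend on q.
        return -0.5 * ((a1 - mean) ** 2).sum(-1).mean()
```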

A Maximum Mutual Information Framework for Multi-Agent Reinforcement Learning

no code implementations • 4 Jun 2020 • Woojun Kim, Whiyoung Jung, Myungsik Cho, Youngchul Sung

In this paper, we propose a maximum mutual information (MMI) framework for multi-agent reinforcement learning (MARL) to enable multiple agents to learn coordinated behaviors by regularizing the accumulated return with the mutual information between actions.

Multiagent Systems
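
In outline, an MMI-style objective augments the expected return with a mutual-information term over the agents' simultaneous actions. A plausible form is sketched below; the weight \alpha and the exact MI term are assumptions for illustration, not taken from the paper.

```latex
\max_{\pi}\;
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t}\right]
\;+\;
\alpha\, I\!\left(a_{t}^{1};\, \ldots;\, a_{t}^{N}\right)
```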

Cross-Domain Imitation Learning with a Dual Structure

no code implementations • 2 Jun 2020 • Sungho Choi, Seungyul Han, Woojun Kim, Youngchul Sung

In this paper, we consider cross-domain imitation learning (CDIL), in which an agent in a target domain learns a policy that performs well in that domain by observing expert demonstrations in a source domain, without access to any reward function.

Imitation Learning

Message-Dropout: An Efficient Training Method for Multi-Agent Deep Reinforcement Learning

no code implementations • 18 Feb 2019 • Woojun Kim, Myungsik Cho, Youngchul Sung

In this paper, we propose a new learning technique named message-dropout to improve performance in multi-agent deep reinforcement learning under two application scenarios: 1) classical multi-agent reinforcement learning with direct message communication among agents, and 2) centralized training with decentralized execution.

Multi-agent Reinforcement Learning • reinforcement-learning • +1
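
Mechanically, message-dropout can be pictured as inverted dropout applied to whole incoming messages during training, so the policy learns not to over-rely on any single communication channel. The sketch below shows that mechanism in isolation; sizes, the dropout rate, and all names are illustrative.

```python
import torch
import torch.nn as nn

class MessageDropoutAgent(nn.Module):
    """During training, each incoming message from another agent is dropped
    (zeroed) independently with probability p; survivors are rescaled as in
    standard inverted dropout."""
    def __init__(self, obs_dim, msg_dim, n_others, act_dim, p=0.5):
        super().__init__()
        self.p = p
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + n_others * msg_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs, messages):  # messages: (batch, n_others, msg_dim)
        if self.training:
            # Drop whole messages, not individual units.
            keep = (torch.rand(messages.shape[:2], device=messages.device)
                    > self.p).float().unsqueeze(-1)
            messages = messages * keep / (1.0 - self.p)
        return self.policy(torch.cat([obs, messages.flatten(1)], dim=-1))
```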
