no code implementations • 26 Mar 2024 • Samuel Li, Sarthak Bhagat, Joseph Campbell, Yaqi Xie, Woojun Kim, Katia Sycara, Simon Stepputtis
Task-oriented grasping of unfamiliar objects is a necessary skill for robots in dynamic in-home environments.
no code implementations • 3 Feb 2024 • Jeonghye Kim, Suyoung Lee, Woojun Kim, Youngchul Sung
Offline reinforcement learning (RL) has seen notable advancements through return-conditioned supervised learning (RCSL) and value-based methods, yet each approach comes with its own set of practical challenges.
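RCSL methods such as Decision Transformer condition the policy on a target return-to-go and train it with a supervised imitation loss. As a minimal, hedged sketch (the discount factor and example rewards are illustrative, not from the paper), the return-to-go targets are just suffix sums of rewards:

```python
# Hedged sketch: computing return-to-go targets for return-conditioned
# supervised learning (RCSL). Discount factor and rewards are illustrative.

def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of (discounted) rewards: g_t = r_t + gamma * g_{t+1}."""
    g = 0.0
    out = []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

# In RCSL, the policy is then trained to imitate logged actions conditioned
# on these targets, e.g. minimizing -log pi(a_t | s_t, g_t).
print(returns_to_go([1.0, 0.0, 2.0]))  # undiscounted: [3.0, 2.0, 2.0]
```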
no code implementations • NeurIPS 2023 • Sungho Choi, Seungyul Han, Woojun Kim, Jongseong Chae, Whiyoung Jung, Youngchul Sung
In this paper, we consider domain-adaptive imitation learning with visual observation, where an agent in a target domain learns to perform a task by observing expert demonstrations in a source domain.
1 code implementation • 5 Oct 2023 • Woojun Kim, Jeonghye Kim, Youngchul Sung
In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic model.
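In the option-critic skeleton such a framework builds on, a critic values options (here, candidate exploration behaviors) and the agent picks among them, running each until its termination condition fires. The following is only a hedged sketch of the option-selection step; the epsilon-greedy rule, names, and values are illustrative assumptions, not the paper's method:

```python
import random

# Hedged sketch of option selection in an option-critic model: a critic
# Q(s, w) scores each option w in the current state, and the agent picks
# one (epsilon-greedily here; this choice rule is an illustrative assumption).

def choose_option(q_values, epsilon, rng):
    """Epsilon-greedy selection over the critic's option values for one state."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))          # explore: random option
    return max(range(len(q_values)), key=lambda w: q_values[w])  # exploit

rng = random.Random(0)
q = [0.1, 0.9, 0.3]  # critic's value for each option in the current state
print(choose_option(q, epsilon=0.0, rng=rng))  # greedy choice: option 1
```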
no code implementations • 4 Oct 2023 • Jeonghye Kim, Suyoung Lee, Woojun Kim, Youngchul Sung
However, we discovered that the attention module of DT is ill-suited to capturing the inherent local dependence pattern in RL trajectories modeled as a Markov decision process.
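Because the Markov property makes each timestep depend mostly on its recent past, one way to reflect this local dependence is to narrow the usual causal attention mask to a fixed window. A minimal sketch (the window size and the 1-means-attend convention are illustrative assumptions):

```python
# Hedged sketch: a local causal attention mask. Each position i may attend
# only to the previous `window` positions, reflecting the local dependence
# of MDP trajectories. Convention: mask[i][j] = 1 iff i may attend to j.

def local_causal_mask(seq_len, window):
    return [[1 if i - window < j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]

for row in local_causal_mask(4, 2):
    print(row)
# each row i attends only to positions i and i-1
```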
no code implementations • 2 Mar 2023 • Woojun Kim, Youngchul Sung
Scalability is one of the essential issues that multi-agent reinforcement learning (MARL) algorithms must address before they can be applied to real-world problems, which typically involve a massive number of agents.
no code implementations • 1 Mar 2023 • Woojun Kim, Whiyoung Jung, Myungsik Cho, Youngchul Sung
In this paper, we propose a new mutual information framework for multi-agent reinforcement learning to enable multiple agents to learn coordinated behaviors by regularizing the accumulated return with the simultaneous mutual information between multi-agent actions.
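The regularizer in the mutual-information framework above adds the mutual information between agents' actions to the return. As a hedged illustration (the empirical joint distribution and two-action setting are made up for the example), the quantity being maximized for two agents is I(A1; A2):

```python
import math

# Hedged sketch: mutual information I(A1; A2) between two agents' discrete
# actions, computed from a joint pmf given as a 2-D list. The example
# distributions are illustrative, not from the paper.

def mutual_information(joint):
    """I(A1; A2) in nats: sum_{i,j} p(i,j) * log(p(i,j) / (p1(i) * p2(j)))."""
    p1 = [sum(row) for row in joint]                # marginal of agent 1
    p2 = [sum(col) for col in zip(*joint)]          # marginal of agent 2
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log(p / (p1[i] * p2[j]))
    return mi

# Perfectly coordinated agents (always pick the same action) vs independent ones.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # log(2) ≈ 0.693 nats
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```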
1 code implementation • 20 Jun 2022 • Jeewon Jeon, Woojun Kim, Whiyoung Jung, Youngchul Sung
In this paper, we consider cooperative multi-agent reinforcement learning (MARL) with sparse reward.
no code implementations • ICLR 2021 • Woojun Kim, Jongeui Park, Youngchul Sung
Communication is one of the core components for learning coordinated behavior in multi-agent systems.
no code implementations • 4 Jun 2020 • Woojun Kim, Whiyoung Jung, Myungsik Cho, Youngchul Sung
In this paper, we propose a maximum mutual information (MMI) framework for multi-agent reinforcement learning (MARL) to enable multiple agents to learn coordinated behaviors by regularizing the accumulated return with the mutual information between actions.
no code implementations • 2 Jun 2020 • Sungho Choi, Seungyul Han, Woojun Kim, Youngchul Sung
In this paper, we consider cross-domain imitation learning (CDIL) in which an agent in a target domain learns a policy to perform well in the target domain by observing expert demonstrations in a source domain without accessing any reward function.
no code implementations • 18 Feb 2019 • Woojun Kim, Myungsik Cho, Youngchul Sung
In this paper, we propose a new learning technique named message-dropout to improve performance in multi-agent deep reinforcement learning under two application scenarios: 1) classical multi-agent reinforcement learning with direct message communication among agents and 2) centralized training with decentralized execution.
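Message-dropout, as described above, randomly drops incoming messages from other agents during training, analogously to standard dropout on hidden units. A minimal, hedged sketch (the per-message zeroing, message format, and drop probability are illustrative assumptions):

```python
import random

# Hedged sketch of message-dropout: during training, each incoming message
# block from another agent is dropped (zeroed) independently with
# probability p before being fed to the policy network. The message
# vectors and drop rate here are illustrative.

def message_dropout(messages, p, rng):
    """Zero out each agent's message vector independently with probability p."""
    return [[0.0] * len(m) if rng.random() < p else list(m)
            for m in messages]

rng = random.Random(1)
msgs = [[0.2, 0.4], [1.0, -1.0], [0.5, 0.5]]
print(message_dropout(msgs, p=0.5, rng=rng))
```

At execution time no messages are dropped, mirroring how standard dropout is disabled at inference.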