Search Results for author: Fuxiang Zhang

Found 3 papers, 2 papers with code

Disentangling Policy from Offline Task Representation Learning via Adversarial Data Augmentation

1 code implementation • 12 Mar 2024 • Chengxing Jia, Fuxiang Zhang, Yi-Chen Li, Chen-Xiao Gao, Xu-Hui Liu, Lei Yuan, Zongzhang Zhang, Yang Yu

Specifically, the objective of adversarial data augmentation is not merely to generate data analogous to the offline data distribution; instead, it aims to create adversarial examples that confound learned task representations and lead to incorrect task identification.

Contrastive Learning • Data Augmentation • +3
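A minimal sketch of the augmentation objective described above, assuming a PyTorch setup with a hypothetical `TaskEncoder` that classifies transitions into task IDs (the encoder, loss, and single FGSM-style step are illustrative assumptions, not the paper's actual architecture or procedure):

```python
# Hypothetical sketch: a one-step, FGSM-style adversarial perturbation of
# offline transitions so that a learned task encoder misidentifies the task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskEncoder(nn.Module):
    """Maps a flattened (s, a, r, s') transition to logits over task IDs."""
    def __init__(self, transition_dim: int, num_tasks: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(transition_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_tasks),
        )

    def forward(self, transition: torch.Tensor) -> torch.Tensor:
        return self.net(transition)

def adversarial_augment(encoder: TaskEncoder,
                        transitions: torch.Tensor,  # (B, transition_dim)
                        task_ids: torch.Tensor,     # (B,) long
                        epsilon: float = 0.05) -> torch.Tensor:
    """Perturb transitions in the direction that increases the
    task-identification loss, yielding augmented data designed to
    confound the learned task representation."""
    transitions = transitions.clone().requires_grad_(True)
    logits = encoder(transitions)
    loss = F.cross_entropy(logits, task_ids)
    grad, = torch.autograd.grad(loss, transitions)
    # Ascend the loss: the goal is not to stay close to the offline data
    # distribution, but to actively attack task identification.
    return (transitions + epsilon * grad.sign()).detach()
```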

Policy Regularization with Dataset Constraint for Offline Reinforcement Learning

2 code implementations • 11 Jun 2023 • Yuhang Ran, Yi-Chen Li, Fuxiang Zhang, Zongzhang Zhang, Yang Yu

A common class of existing offline RL methods is policy regularization, which typically constrains the learned policy to the distribution or support of the behavior policy.

Offline RL • reinforcement-learning • +1
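As a rough illustration of the dataset-constraint flavor of policy regularization, a sketch under assumptions (not the paper's implementation): penalize the distance from each (state, policy action) pair to its nearest (state, action) pair in the offline dataset, rather than matching the behavior policy's distribution. The `beta` weighting and the brute-force nearest-neighbor search are simplifying illustrative choices.

```python
# Illustrative dataset-constraint regularizer: pull the learned policy
# toward the nearest (state, action) pair actually present in the dataset.
import torch

def dataset_constraint_loss(policy_actions: torch.Tensor,  # (B, act_dim)
                            states: torch.Tensor,          # (B, obs_dim)
                            data_states: torch.Tensor,     # (N, obs_dim)
                            data_actions: torch.Tensor,    # (N, act_dim)
                            beta: float = 2.0) -> torch.Tensor:
    """Mean distance from each (s, pi(s)) to its nearest dataset (s, a)."""
    # Concatenate state and action, up-weighting states by beta so the
    # nearest neighbor is sought among transitions with similar states.
    query = torch.cat([beta * states, policy_actions], dim=-1)    # (B, D)
    keys = torch.cat([beta * data_states, data_actions], dim=-1)  # (N, D)
    # Brute-force pairwise distances; fine for a sketch, slow at scale
    # (a KD-tree or approximate search would be used in practice).
    dists = torch.cdist(query, keys)                              # (B, N)
    nearest = dists.min(dim=1).values
    return nearest.mean()
```

In practice such a term would be combined with a value-maximization objective, e.g. an actor loss of the form -Q(s, π(s)) + λ · constraint, so the policy improves on the behavior data while staying within its support.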

Multi-agent Continual Coordination via Progressive Task Contextualization

no code implementations • 7 May 2023 • Lei Yuan, Lihe Li, Ziqian Zhang, Fuxiang Zhang, Cong Guan, Yang Yu

To tackle this issue, this paper proposes an approach called Multi-Agent Continual Coordination via Progressive Task Contextualization, dubbed MACPro.

Continual Learning • Multi-agent Reinforcement Learning
