no code implementations • 31 Jan 2024 • Tim Tse, Isaac Chan, Zhitang Chen
In this work, we propose a novel algorithmic framework for data sharing and coordinated exploration for the purpose of learning more data-efficient and better performing policies under a concurrent reinforcement learning (CRL) setting.
no code implementations • 31 Jan 2024 • Tim Tse, Zhitang Chen, Shengyu Zhu, Yue Liu
Capturing these discrepancies between cause and effect remains a challenge, and many current state-of-the-art algorithms propose comparing the norms of the kernel mean embeddings of the conditional distributions.
no code implementations • 22 Aug 2023 • Junlong Lyu, Zhitang Chen, Shoubo Feng
We provide the first convergence guarantees for Consistency Models (CMs), a newly emerging type of one-step generative model that can generate samples comparable to those produced by Diffusion Models.
no code implementations • 9 Aug 2023 • Wenlong Lyu, Shoubo Hu, Jie Chuai, Zhitang Chen
Bayesian optimization (BO) is widely adopted in black-box optimization problems and it relies on a surrogate model to approximate the black-box response function.
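For readers unfamiliar with the surrogate-based loop that BO relies on, here is a deliberately simplified sketch: a nearest-neighbour predictor with a distance-based exploration bonus stands in for the Gaussian-process surrogate normally used. Everything here (`bo_sketch`, the grid, `beta`) is an illustrative assumption, not the method proposed in this paper:

```python
def bo_sketch(f, lo=0.0, hi=1.0, n_iter=12, beta=0.5):
    """Minimize f on [lo, hi] with a toy surrogate:
    predicted value = value at the nearest observed point,
    minus an exploration bonus for being far from observations."""
    xs = [lo, hi]                       # initial design points
    ys = [f(lo), f(hi)]
    grid = [lo + (hi - lo) * i / 200 for i in range(201)]
    for _ in range(n_iter):
        def acquisition(x):
            # nearest-neighbour prediction, rewarded for exploring
            d, y = min((abs(x - xi), yi) for xi, yi in zip(xs, ys))
            return y - beta * d         # lower is better
        x_next = min(grid, key=acquisition)
        xs.append(x_next)
        ys.append(f(x_next))
    y_best, x_best = min(zip(ys, xs))
    return x_best, y_best

# Toy black-box objective with its minimum at x = 0.3.
x_best, y_best = bo_sketch(lambda x: (x - 0.3) ** 2)
```

A real BO implementation would replace the toy predictor with a Gaussian process and the bonus with a principled acquisition function such as expected improvement.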
no code implementations • 30 Jan 2023 • Junlong Lyu, Zhitang Chen, Wenlong Lyu, Jianye Hao
We propose a new technique to accelerate sampling methods for solving difficult optimization problems.
no code implementations • 29 Dec 2022 • Mehrtash Mehrabi, Walid Masoudimansour, Yingxue Zhang, Jie Chuai, Zhitang Chen, Mark Coates, Jianye Hao, Yanhui Geng
This performance relies heavily on the configuration of the network parameters.
1 code implementation • 17 Jun 2022 • Xinwei Shen, Shengyu Zhu, Jiji Zhang, Shoubo Hu, Zhitang Chen
In this paper, we revisit the Greedy Equivalence Search (GES) algorithm, which is widely cited as a score-based algorithm for learning the MEC of the underlying causal structure.
no code implementations • CVPR 2022 • Ruoyu Wang, Mingyang Yi, Zhitang Chen, Shengyu Zhu
In this work, we obviate these assumptions and tackle the OOD problem without explicitly recovering the causal feature.
no code implementations • 17 Feb 2022 • Mengyue Yang, Xinyu Cai, Furui Liu, Xu Chen, Zhitang Chen, Jianye Hao, Jun Wang
There is evidence that representation learning can improve a model's performance across multiple downstream tasks in many real-world scenarios, such as image classification and recommender systems.
no code implementations • 7 Feb 2022 • Junlong Lyu, Zhitang Chen, Chang Feng, Wenjing Cun, Shengyu Zhu, Yanhui Geng, Zhijie Xu, Yongwei Chen
Invertible neural networks based on Coupling Flows (CFlows) have various applications such as image synthesis and data compression.
no code implementations • 23 Dec 2021 • Xiangle Cheng, James He, Shihan Xiao, Yingxue Zhang, Zhitang Chen, Pascal Poupart, FengLin Li
Machine learning is gaining momentum in various recent models for the dynamic analysis of information flows in data communication networks.
2 code implementations • 30 Nov 2021 • Keli Zhang, Shengyu Zhu, Marcus Kalander, Ignavier Ng, Junjian Ye, Zhitang Chen, Lujia Pan
$\texttt{gCastle}$ is an end-to-end Python toolbox for causal structure learning.
no code implementations • 29 Sep 2021 • Ruoyu Wang, Mingyang Yi, Shengyu Zhu, Zhitang Chen
In this work, we obviate these assumptions and tackle the OOD problem without explicitly recovering the causal feature.
no code implementations • 29 Sep 2021 • Mengyue Yang, Furui Liu, Xu Chen, Zhitang Chen, Jianye Hao, Jun Wang
In many real-world scenarios, such as image classification and recommender systems, there is evidence that representation learning can improve a model's performance across multiple downstream tasks.
2 code implementations • 7 Jun 2021 • Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang Chen, Jun Wang, Jan Peters, Haitham Bou-Ammar
We introduce a method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces.
Ranked #1 on Molecular Graph Generation on ZINC
no code implementations • 2 Jun 2021 • Yunqi Wang, Furui Liu, Zhitang Chen, Qing Lian, Shoubo Hu, Jianye Hao, Yik-Chung Wu
Domain generalization aims to learn, from multiple source domains, knowledge that is invariant across different distributions yet semantically meaningful for downstream tasks, so as to improve the model's generalization ability on unseen target domains.
1 code implementation • 14 May 2021 • Xiaoqiang Wang, Yali Du, Shengyu Zhu, Liangjun Ke, Zhitang Chen, Jianye Hao, Jun Wang
Discovering causal relations among a set of variables is a long-standing question in many empirical sciences.
no code implementations • 1 Jan 2021 • Peng Zhang, Furui Liu, Zhitang Chen, Jianye Hao, Jun Wang
Reinforcement Learning (RL) has shown great potential to deal with sequential decision-making problems.
no code implementations • 28 Dec 2020 • Minne Li, Mengyue Yang, Furui Liu, Xu Chen, Zhitang Chen, Jun Wang
The capability of imagining internally with a mental model of the world is vitally important for human cognition.
1 code implementation • 6 Oct 2020 • Xinwei Shen, Furui Liu, Hanze Dong, Qing Lian, Zhitang Chen, Tong Zhang
This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method under appropriate supervised information.
no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input, which, however, inevitably leads to severe accuracy reduction as it discards the non-robust yet useful features.
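The adversarial vulnerability that AT defends against can be sketched with the classic Fast Gradient Sign Method on a one-dimensional logistic model. This is a generic illustration of adversarial perturbation, not this paper's method, and all names below are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Binary cross-entropy of a 1-D logistic model p = sigmoid(w * x)."""
    p = sigmoid(w * x)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method: step the INPUT in the direction that
    increases the loss.  For this model, dL/dx = (sigmoid(w*x) - y) * w."""
    grad_x = (sigmoid(w * x) - y) * w
    return x + eps * (1 if grad_x > 0 else -1)

w, x, y = 2.0, 1.5, 1          # model weight, clean input, true label
x_adv = fgsm(w, x, y, eps=0.5) # adversarially perturbed input
```

Adversarial training then minimizes the loss on such perturbed inputs rather than on the clean ones, which is what discards non-robust but useful features.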
no code implementations • 10 Jun 2020 • Zhuangyan Fang, Shengyu Zhu, Jiji Zhang, Yue Liu, Zhitang Chen, Yangbo He
Despite several advances in recent years, learning causal structures represented by directed acyclic graphs (DAGs) remains a challenging task in high dimensional settings when the graphs to be learned are not sparse.
no code implementations • 8 Jun 2020 • Vahid Partovi Nia, Xinlin Li, Masoud Asgharian, Shoubo Hu, Zhitang Chen, Yanhui Geng
Our simulation results show that the proposed adjustment significantly improves the performance of the causal direction test statistic for heterogeneous data.
2 code implementations • CVPR 2021 • Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, Jun Wang
Learning disentanglement aims at finding a low-dimensional representation consisting of multiple explanatory and generative factors of the observational data.
3 code implementations • 18 Nov 2019 • Ignavier Ng, Shengyu Zhu, Zhitang Chen, Zhuangyan Fang
Causal structure learning has been a challenging task in the past decades and several mainstream approaches such as constraint- and score-based methods have been studied with theoretical guarantees.
2 code implementations • 18 Oct 2019 • Ignavier Ng, Shengyu Zhu, Zhuangyan Fang, Haoyang Li, Zhitang Chen, Jun Wang
This paper studies the problem of learning causal structures from observational data.
no code implementations • 25 Sep 2019 • Tianshuo Cong, Dan Peng, Furui Liu, Zhitang Chen
Our experiments demonstrate that our method correctly identifies the bivariate causal relationship between concepts in images, and that the learned representation enables a do-calculus manipulation of images, generating artificial images that may break physical laws depending on where we intervene in the causal system.
no code implementations • 2 Sep 2019 • Zhitang Chen, Shengyu Zhu, Yue Liu, Tim Tse
We show our algorithm can be reduced to an eigen-decomposition task on a kernel matrix measuring intrinsic deviance/invariance.
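As a sketch of the eigen-decomposition step, the dominant eigenpair of a kernel matrix can be extracted with plain power iteration. The RBF kernel and point set below are stand-ins; the paper's deviance/invariance kernel is not reproduced here:

```python
import math

def rbf_kernel_matrix(points, gamma=1.0):
    """Gram matrix K[i][j] = exp(-gamma * (x_i - x_j)^2) for 1-D points."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in points] for a in points]

def power_iteration(K, n_steps=200):
    """Dominant eigenpair of a symmetric matrix via power iteration."""
    n = len(K)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(n_steps):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))   # Rayleigh-style estimate
        v = [x / lam for x in w]
    return lam, v

# Two loose clusters of 1-D points; the top eigenvector reflects them.
K = rbf_kernel_matrix([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
lam, v = power_iteration(K)
```

Since the kernel matrix is symmetric positive semi-definite, the iteration converges to the largest eigenvalue; full spectral methods replace this loop with a library eigensolver.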
no code implementations • 27 Aug 2019 • Shengyu Zhu, Biao Chen, Zhitang Chen, Pengfei Yang
With Sanov's theorem, we derive a sufficient condition for one-sample tests to achieve the optimal error exponent in the universal setting, i.e., for any distribution defining the alternative hypothesis.
no code implementations • 25 Jul 2019 • Shoubo Hu, Kun Zhang, Zhitang Chen, Laiwan Chan
Domain generalization (DG) aims to incorporate knowledge from multiple source domains into a single model that could generalize well on unseen target domains.
1 code implementation • ICLR 2020 • Shengyu Zhu, Ignavier Ng, Zhitang Chen
The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity.
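An acyclicity penalty of this kind can be illustrated with the trace-of-matrix-exponential characterization popularized by NOTEARS, which this line of work builds on; whether the paper uses exactly this form is an assumption, so treat the sketch as illustrative:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def acyclicity(W, n_terms=20):
    """NOTEARS-style penalty h(W) = trace(exp(W o W)) - d, approximated by a
    truncated power series; h(W) = 0 iff W encodes no directed cycles."""
    n = len(W)
    M = [[w * w for w in row] for row in W]          # Hadamard square
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    trace_exp = float(n)                             # k = 0 term (identity)
    for k in range(1, n_terms):
        term = matmul(term, M)                       # now term = M^k / k!
        term = [[x / k for x in row] for row in term]
        trace_exp += sum(term[i][i] for i in range(n))
    return trace_exp - n

dag   = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # 0 -> 1 -> 2, acyclic
cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # 0 -> 1 -> 2 -> 0, cyclic
```

Subtracting such a penalty from the score makes cyclic candidate graphs strictly worse, steering the search toward DAGs.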
no code implementations • 27 Nov 2018 • Xiaoxiao Wang, Xueying Guo, Jie Chuai, Zhitang Chen, Xin Liu
We evaluate the effectiveness of our algorithm on a simulator built from real traces.
1 code implementation • NeurIPS 2018 • Shoubo Hu, Zhitang Chen, Vahid Partovi Nia, Laiwan Chan, Yanhui Geng
The inference of the causal relationship between a pair of observed variables is a fundamental problem in science, and most existing approaches are based on one single causal model.
no code implementations • 23 Sep 2018 • Shoubo Hu, Zhitang Chen, Laiwan Chan
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration.
no code implementations • ICML 2018 • Thomas G. Dietterich, George Trimponias, Zhitang Chen
Exogenous state variables and rewards can slow down reinforcement learning by injecting uncontrolled variation into the reward signal.
no code implementations • 23 Feb 2018 • Shengyu Zhu, Biao Chen, Zhitang Chen
Given two sets of independent samples from unknown distributions $P$ and $Q$, a two-sample test decides whether to reject the null hypothesis that $P=Q$.
no code implementations • 21 Feb 2018 • Shengyu Zhu, Biao Chen, Pengfei Yang, Zhitang Chen
We show that two classes of Maximum Mean Discrepancy (MMD) based tests attain this optimality on $\mathbb R^d$, while the quadratic-time Kernel Stein Discrepancy (KSD) based tests achieve the maximum exponential decay rate under a relaxed level constraint.
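A minimal, self-contained version of the unbiased MMD^2 estimator underlying such tests is below; the Gaussian kernel and its bandwidth are illustrative choices, not the configurations analyzed in the paper:

```python
import math
import random

def gaussian_kernel(x, y, bandwidth=1.0):
    return math.exp(-((x - y) ** 2) / (2 * bandwidth ** 2))

def mmd2_unbiased(xs, ys, k=gaussian_kernel):
    """Unbiased estimate of MMD^2(P, Q) from samples xs ~ P and ys ~ Q:
    mean within-sample kernel values (off-diagonal) minus twice the
    mean cross-sample kernel value."""
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for i, a in enumerate(xs)
              for j, b in enumerate(xs) if i != j)
    kyy = sum(k(a, b) for i, a in enumerate(ys)
              for j, b in enumerate(ys) if i != j)
    kxy = sum(k(a, b) for a in xs for b in ys)
    return kxx / (m * (m - 1)) + kyy / (n * (n - 1)) - 2 * kxy / (m * n)

random.seed(0)
same    = mmd2_unbiased([random.gauss(0, 1) for _ in range(100)],
                        [random.gauss(0, 1) for _ in range(100)])
shifted = mmd2_unbiased([random.gauss(0, 1) for _ in range(100)],
                        [random.gauss(2, 1) for _ in range(100)])
```

A test would compare such a statistic against a threshold (e.g. from permutations) to decide whether to reject P = Q.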
no code implementations • NeurIPS 2012 • Zhitang Chen, Kun Zhang, Laiwan Chan
In conventional causal discovery, structural equation models (SEM) are directly applied to the observed variables, meaning that the causal effect can be represented as a function of the direct causes themselves.
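The conventional SEM setup described here can be made concrete with a toy linear model in which the effect is a function of its direct cause plus independent noise; the coefficient and noise scale below are arbitrary illustrative choices:

```python
import random

random.seed(1)

# A structural equation model over observed variables: each variable is
# a function of its direct causes plus independent noise.  Here y := 2*x + e.
xs = [random.uniform(-1, 1) for _ in range(500)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

# Recover the causal coefficient of the direct cause by least squares.
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
```

The paper's point of departure is that the observed variables may only be transformations of the variables that actually enter such equations.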