no code implementations • 3 Feb 2023 • Dongjie Wang, Zhengzhang Chen, Jingchao Ni, Liang Tong, Zheng Wang, Yanjie Fu, Haifeng Chen
REASON consists of Topological Causal Discovery and Individual Causal Discovery.
1 code implementation • 26 Oct 2022 • Tianchun Wang, Wei Cheng, Dongsheng Luo, Wenchao Yu, Jingchao Ni, Liang Tong, Haifeng Chen, Xiang Zhang
Personalized Federated Learning (PFL), which collaboratively trains a federated model while accounting for local clients under privacy constraints, has attracted much attention.
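A minimal sketch of what such a setup can look like, assuming toy numpy linear models and made-up client data (this does not reflect the paper's actual architecture): clients average a shared weight vector on the server, while each keeps a personal bias term that never leaves the device.

```python
# Illustrative personalized-federated-learning round (not the paper's method):
# a shared weight vector is averaged across clients, a personal bias stays local.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 5
global_w = np.zeros(dim)
personal_bias = np.zeros(n_clients)          # personal part, never shared

# Hypothetical local data: (features, targets) per client
data = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_clients)]

def local_update(w, b, X, y, lr=0.05, steps=10):
    """A few SGD steps on a local least-squares objective."""
    for _ in range(steps):
        pred = X @ w + b
        grad_w = X.T @ (pred - y) / len(y)
        grad_b = np.mean(pred - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

client_ws = []
for i, (X, y) in enumerate(data):
    w_i, personal_bias[i] = local_update(global_w.copy(), personal_bias[i], X, y)
    client_ws.append(w_i)

# Server aggregates only the shared part (plain FedAvg-style averaging)
global_w = np.mean(client_ws, axis=0)
print("shared weights:", global_w.round(3), "personal biases:", personal_bias.round(3))
```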
no code implementations • 25 Oct 2022 • Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou
Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, which craft a poisoned graph that is then used as the input to the GNN models.
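For illustration only, and not the paper's attack model, a toy sketch of what a poisoned graph means in practice: an adversary with a small edge budget flips entries of the training adjacency matrix before the victim GNN ever sees it (the edge choices here are random; a real attack would optimize them).

```python
# Toy data-poisoning sketch: flip a budgeted number of edges in the clean graph.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = (rng.random((n, n)) < 0.3).astype(int)
A = np.triu(A, 1); A = A + A.T                     # clean, undirected adjacency

def flip_edge(adj, u, v):
    adj = adj.copy()
    adj[u, v] = adj[v, u] = 1 - adj[u, v]          # add edge if absent, remove if present
    return adj

budget = 3
poisoned = A.copy()
for _ in range(budget):
    u, v = rng.choice(n, size=2, replace=False)    # a real attack would pick these
    poisoned = flip_edge(poisoned, u, v)           # greedily or via gradients, not at random

print("edges changed:", int(np.abs(poisoned - A).sum() // 2))
```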
1 code implementation • CVPR 2021 • Liang Tong, Zhengzhang Chen, Jingchao Ni, Wei Cheng, Dongjin Song, Haifeng Chen, Yevgeniy Vorobeychik
Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks.
no code implementations • 8 May 2020 • Liang Tong, Minzhe Guo, Atul Prakash, Yevgeniy Vorobeychik
We then experimentally demonstrate that our attacks indeed do not significantly change the perceptual salience of the background, yet are highly effective against classifiers that are robust to conventional attacks.
2 code implementations • ICLR 2020 • Tong Wu, Liang Tong, Yevgeniy Vorobeychik
Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.
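A generic adversarial-training loop, sketched under strong simplifications in PyTorch: each batch is perturbed by a crude stand-in attack (a randomly placed occlusion patch) before the usual gradient step. The model, patch size, and random placement are assumptions made for brevity; the paper's attack searches for worst-case perturbations rather than sampling them.

```python
# Generic adversarial-training step: attack the batch, then train on the attacked batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def occlude(x, size=7):
    """Paste a random square noise patch onto each image (stand-in attack)."""
    x = x.clone()
    for img in x:
        top = torch.randint(0, 28 - size, (1,)).item()
        left = torch.randint(0, 28 - size, (1,)).item()
        img[:, top:top + size, left:left + size] = torch.rand(1, size, size)
    return x

# One training step on a fake batch (replace with a real data loader)
images = torch.rand(16, 1, 28, 28)
labels = torch.randint(0, 10, (16,))

adv_images = occlude(images)                 # attack the batch first ...
loss = loss_fn(model(adv_images), labels)    # ... then train on the attacked inputs
opt.zero_grad()
loss.backward()
opt.step()
print("adversarial-training loss:", float(loss))
```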
no code implementations • 20 Jun 2019 • Liang Tong, Aron Laszka, Chao Yan, Ning Zhang, Yevgeniy Vorobeychik
We then use these in a double-oracle framework to obtain an approximate equilibrium of the game, which in turn yields a robust stochastic policy for the defender.
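The double-oracle idea can be illustrated on a toy zero-sum matrix game (an assumption made purely for exposition; the paper's game has far larger, structured strategy spaces): both players start with one strategy, the restricted game is solved exactly, and each player's best response to the opponent's current mixture is added until neither side can improve.

```python
# Double-oracle sketch on a random zero-sum matrix game.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U):
    """Row player's equilibrium mixture and value for payoff matrix U (row maximizes)."""
    m, n = U.shape
    c = np.zeros(m + 1); c[-1] = -1.0                        # minimize -v
    A_ub = np.hstack([-U.T, np.ones((n, 1))]); b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0; b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

rng = np.random.default_rng(2)
U = rng.normal(size=(6, 6))                                  # full defender-vs-attacker payoffs
D, A = [0], [0]                                              # restricted strategy indices
while True:
    sub = U[np.ix_(D, A)]
    x, _ = solve_zero_sum(sub)                               # defender mixture on restricted game
    y, _ = solve_zero_sum(-sub.T)                            # attacker mixture on restricted game
    # Best responses against the opponent's mixture, searched over ALL strategies
    br_d = int(np.argmax(U[:, A] @ y))
    br_a = int(np.argmin(x @ U[D, :]))
    if br_d in D and br_a in A:
        break                                                # neither player can improve: done
    D = sorted(set(D) | {br_d}); A = sorted(set(A) | {br_a})

print("defender support:", D, "stochastic policy:", x.round(3))
```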
1 code implementation • ICML 2018 • Liang Tong, Sixie Yu, Scott Alfeld, Yevgeniy Vorobeychik
We present an algorithm for computing this equilibrium, and show through extensive experiments that equilibrium models are significantly more robust than conventional regularized linear regression.
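As a rough illustration, and not the paper's equilibrium algorithm, the sketch below alternates best responses between a ridge-regression learner and an attacker who shifts feature vectors toward a target prediction at quadratic cost; all constants and data are hypothetical.

```python
# Best-response dynamics between a ridge learner and a feature-shifting attacker.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam, cost, z = 50, 4, 1.0, 5.0, 10.0
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=n)

def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def attack(X, theta, z, cost):
    """Closed-form best response: minimize (theta.x' - z)^2 + cost * ||x' - x||^2."""
    shift = (z - X @ theta) / (cost + theta @ theta)
    return X + np.outer(shift, theta)

theta = ridge(X, y, lam)                     # conventional regularized regression
for _ in range(20):                          # alternate best responses
    X_adv = attack(X, theta, z, cost)
    theta = ridge(X_adv, y, lam)

print("ridge on clean data:", ridge(X, y, lam).round(2))
print("after best-response rounds:", theta.round(2))
```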
no code implementations • 28 Aug 2017 • Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik
A conventional approach to evaluating ML robustness to such attacks, as well as to designing robust ML, is to consider simplified feature-space models of attacks, where the attacker changes ML features directly to effect evasion while minimizing or constraining the magnitude of this change.
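A feature-space attack model of this kind is easy to state concretely. The sketch below, with hypothetical weights and a hypothetical malicious instance, moves the point just across a linear classifier's decision boundary with the smallest possible L2 feature change.

```python
# Feature-space evasion against a linear classifier: minimal-L2 shift across the boundary.
import numpy as np

w = np.array([2.0, -1.0, 0.5])     # hypothetical trained weights (positive = malicious)
b = -0.5
x = np.array([1.5, 0.2, 1.0])      # malicious instance, classified as positive

score = w @ x + b
x_adv = x - (score + 1e-3) * w / (w @ w)   # smallest L2 change that flips the sign

print("original score:", round(float(score), 3), "evasive score:", round(float(w @ x_adv + b), 3))
print("feature change magnitude:", round(float(np.linalg.norm(x_adv - x)), 3))
```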