no code implementations • ECCV 2020 • Chengfeng Wen, Yang Guo, Xianfeng Gu
Based on Teichmüller theory, this mapping space is generated by the Beltrami coefficients, which are infinitesimally Teichmüller equivalent to $0$.
no code implementations • 19 Aug 2023 • Zhihao Wen, Yuan Fang, Yihan Liu, Yang Guo, Shuji Hao
We design a novel graph prompting function to reformulate the downstream task into a similar template as the pretext task in pre-training, thereby narrowing the objective gap.
no code implementations • 27 May 2023 • Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha
Nevertheless, under recent strong adversarial attacks (GMSA, which has been shown to be much more effective than AutoAttack against transduction), Goldwasser et al.'s work was shown to have low performance in a practical deep-learning setting.
no code implementations • 12 Apr 2022 • Melih C. Yesilli, Jisheng Chen, Firas A. Khasawneh, Yang Guo
Comparing our results with the heuristic threshold selection approach shows good agreement, with mean accuracies as high as 95%.
1 code implementation • ICLR 2022 • Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha
There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021).
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 18 Aug 2021 • Munan Ning, Cheng Bian, Dong Wei, Chenglang Yuan, Yaohua Wang, Yang Guo, Kai Ma, Yefeng Zheng
Domain shift happens in cross-domain scenarios commonly because of the wide gaps between different domains: when applying a deep learning model well-trained in one domain to another target domain, the model usually performs poorly.
no code implementations • 13 Aug 2021 • Yang Guo, Xuekui Zhang, Fatemeh Esfahani, Venkatesh Srinivasan, Alex Thomo, Li Xing
To make the previous PA focus more on dense subgraphs, we propose a multi-stage graph peeling algorithm (M-PA) that adds a two-stage data screening procedure before the previous PA. After removing vertices from the graph based on user-defined thresholds, we can reduce the graph complexity substantially without affecting the vertices in the subgraphs we are interested in.
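The screening idea described above can be sketched in a few lines. This is a minimal illustration of threshold-based vertex screening (the function name, parameters, and edge-list representation are assumptions for illustration), not the paper's M-PA implementation:

```python
def screen(edges, threshold):
    """Stage-one screening sketch: repeatedly remove vertices whose
    degree falls below a user-defined threshold (k-core style peeling),
    leaving a smaller graph for the subsequent peeling algorithm."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    queue = [v for v, nbrs in adj.items() if len(nbrs) < threshold]
    while queue:
        v = queue.pop()
        if v not in adj:          # already removed via another path
            continue
        for u in adj.pop(v):      # delete v and update its neighbours
            if u in adj:
                adj[u].discard(v)
                if len(adj[u]) < threshold:
                    queue.append(u)
    return adj                    # the surviving, denser subgraph
```

For example, screening a triangle with one pendant vertex at `threshold=2` removes only the pendant vertex, leaving the dense triangle intact.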
no code implementations • 15 Jun 2021 • Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model based on test time input via transductive learning; and theoretically, attacking these defenses boils down to bilevel optimization, which seems to raise the difficulty for adaptive attacks.
no code implementations • 10 Jun 2021 • Yang Guo, Tarique Anwar, Jian Yang, Jia Wu
As the process should be socially and economically profitable, the task of vehicle dispatching is highly challenging, especially due to the time-varying travel demands and traffic conditions.
no code implementations • 30 Apr 2021 • Yiming Sun, Yang Guo, Joel A. Tropp, Madeleine Udell
The TRP map is formed as the Khatri-Rao product of several smaller random projections and is compatible with any base random projection, including sparse maps, which enables dimension reduction with very low query cost and no floating-point operations.
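As an illustration of the Khatri-Rao structure behind the TRP map, here is a minimal sketch (the function name and list-of-lists matrix representation are assumptions for illustration, not the paper's code). The column-wise Kronecker product builds an (m·n) × r map while storing only the two small factors:

```python
def khatri_rao(A, B):
    """Column-wise Kronecker product of A (m x r) and B (n x r),
    returning an (m*n) x r matrix as nested lists.  Column j of the
    result is the Kronecker product of column j of A and column j of B."""
    m, n = len(A), len(B)
    r = len(A[0])
    assert len(B[0]) == r, "A and B must have the same number of columns"
    out = [[0.0] * r for _ in range(m * n)]
    for j in range(r):
        for i in range(m):
            for k in range(n):
                out[i * n + k][j] = A[i][j] * B[k][j]
    return out
```

With random factor matrices, this yields a tall random projection of m·n rows from only (m + n)·r stored random numbers, which is the storage saving the TRP construction exploits.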
no code implementations • 28 Mar 2021 • Xiguo Yuan, Yuan Zhao, Yang Guo, Linmei Ge, Wei Liu, Shiyu Wen, Qi Li, Zhangbo Wan, Peina Zheng, Tao Guo, Zhida Li, Martin Peifer, Yupeng Cun
In the past decade, a variety of methods have been developed for subclonal reconstruction using bulk tumor sequencing data.
no code implementations • ICCV 2021 • Min Zhang, Yang Guo, Na Lei, Zhou Zhao, Jianfeng Wu, Xiaoyin Xu, Yalin Wang, Xianfeng Gu
Shape analysis has been playing an important role in early diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's diseases (AD).
no code implementations • 1 Jan 2021 • Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha
On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (${\sf DANN}$), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.
no code implementations • 20 Jul 2020 • Munan Ning, Cheng Bian, Donghuan Lu, Hong-Yu Zhou, Shuang Yu, Chenglang Yuan, Yang Guo, Yaohua Wang, Kai Ma, Yefeng Zheng
Primary angle closure glaucoma (PACG) is the leading cause of irreversible blindness among Asian people.
1 code implementation • ICLR 2020 • Dongsheng An, Yang Guo, Na Lei, Zhongxuan Luo, Shing-Tung Yau, Xianfeng Gu
In order to tackle both problems, we explicitly separate the manifold embedding and the optimal transportation: the first part is carried out using an autoencoder to map the images onto the latent space; the second part is accomplished using a GPU-based convex optimization to find the discontinuous transportation maps.
no code implementations • 22 Apr 2020 • Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani
Recent studies provide hints and failure examples for domain invariant representation learning, a common approach for this problem, but the explanations provided are somewhat different and do not provide a unified picture.
no code implementations • ECCV 2020 • Dongsheng An, Yang Guo, Min Zhang, Xin Qi, Na Lei, Shing-Tung Yau, Xianfeng Gu
Though generative adversarial networks (GANs) are prominent models to generate realistic and crisp images, they often encounter mode collapse problems and are hard to train, which comes from approximating the intrinsic discontinuous distribution transform map with continuous DNNs.
no code implementations • 18 Nov 2019 • Yang Guo, Zhengyuan Liu, Pavitra Krishnswamy, Savitha Ramasamy
Real-world clinical time series data sets exhibit a high prevalence of missing values.
2 code implementations • 24 Apr 2019 • Yiming Sun, Yang Guo, Charlene Luo, Joel Tropp, Madeleine Udell
This paper describes a new algorithm for computing a low-Tucker-rank approximation of a tensor.
no code implementations • 8 Feb 2019 • Na Lei, Yang Guo, Dongsheng An, Xin Qi, Zhongxuan Luo, Shing-Tung Yau, Xianfeng Gu
This work builds the connection between the regularity theory of optimal transportation maps, the Monge-Ampère equation, and GANs, which gives a theoretical understanding of the major drawbacks of GANs: convergence difficulty and mode collapse.
no code implementations • 16 Sep 2018 • Huidong Liu, Yang Guo, Na Lei, Zhixin Shu, Shing-Tung Yau, Dimitris Samaras, Xianfeng Gu
Experimental results on an eight-Gaussian dataset show that the proposed OT can handle multi-cluster distributions.