2 code implementations • NeurIPS 2020 • Lu Wang, Xuanqing Liu, Jin-Feng Yi, Yuan Jiang, Cho-Jui Hsieh
Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.
1 code implementation • 11 May 2020 • Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang
By constraining adversarial perturbations to a low-dimensional subspace spanned by an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
1 code implementation • ICLR 2020 • Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu
In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.
no code implementations • 7 Dec 2019 • Yongshun Gong, Zhibin Li, Jian Zhang, Wei Liu, Jin-Feng Yi
In this paper, this specific problem is termed as potential passenger flow (PPF) prediction, which is a novel and important study connected with urban computing and intelligent transportation systems.
1 code implementation • NeurIPS 2019 • Xingyu Cai, Tingyang Xu, Jin-Feng Yi, Junzhou Huang, Sanguthevar Rajasekaran
Dynamic Time Warping (DTW) is widely used as a similarity measure in various domains.
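For readers unfamiliar with the measure, DTW aligns two sequences by dynamic programming, allowing one index to stretch or compress against the other. A minimal reference implementation (squared-difference local cost, O(nm) time):

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # best of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For example, `dtw([0, 1], [0, 1, 1])` is 0 because the trailing `1` can be warped onto the repeated `1`, whereas Euclidean distance would be undefined for sequences of different lengths.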
4 code implementations • ICCV 2019 • Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey
In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).
Ranked #43 on Image Classification on Clothing1M
no code implementations • 2 Jul 2019 • Zhibin Li, Jian Zhang, Qiang Wu, Yongshun Gong, Jin-Feng Yi, Christina Kirsch
In this paper, we formulate our prediction task as a multiple kernel learning problem with missing kernels.
no code implementations • 16 Jun 2019 • Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong
As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.
1 code implementation • 10 Jun 2019 • Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
Furthermore, we show that the dual solutions of these QP problems yield a valid lower bound on the adversarial perturbation, which can be used for formal robustness verification and offers a unified view of attack and verification for NN models.
no code implementations • 10 Jun 2019 • Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou
Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.
no code implementations • 28 May 2019 • Pengcheng Li, Jin-Feng Yi, Bo-Wen Zhou, Lijun Zhang
In this paper, we improve the robustness of DNNs by utilizing techniques of Distance Metric Learning.
no code implementations • ICLR 2019 • Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
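The core subroutine that makes hard-label attacks tractable is measuring the distance to the decision boundary along a fixed direction using only label queries, via binary search. The sketch below is illustrative of that idea, not the paper's implementation; `predict`, `boundary_distance`, and the toy model are hypothetical names of my own.

```python
import numpy as np

def boundary_distance(predict, x, true_label, direction, hi=10.0, tol=1e-4):
    """Smallest step t (up to `tol`) with predict(x + t*direction) != true_label,
    using only hard-label queries to `predict`."""
    direction = direction / np.linalg.norm(direction)
    if predict(x + hi * direction) == true_label:
        return np.inf  # no boundary crossing found within `hi`
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x + mid * direction) == true_label:
            lo = mid   # still on the original side
        else:
            hi = mid   # crossed the boundary
    return hi

# toy "model": label is whether the first coordinate is positive
predict = lambda z: int(z[0] > 0)
x = np.array([3.0, 0.0])                 # classified as 1
d = np.array([-1.0, 0.0])                # direction toward the boundary at x0 = 0
t = boundary_distance(predict, x, 1, d)  # converges to about 3.0
```

With this distance function in hand, the attack reduces to minimizing a continuous function of the search direction, which gradient-free optimizers can handle.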
no code implementations • 1 May 2019 • Yongshun Gong, Jin-Feng Yi, Dong-Dong Chen, Jian Zhang, Jiayu Zhou, Zhi-Hua Zhou
In this paper, we aim to infer the significance of every item's appearance in consumer decision making and identify the group of items that are suitable for screenless shopping.
no code implementations • 24 Apr 2019 • Xinlei Pan, Wei-Yao Wang, Xiaoshuai Zhang, Bo Li, Jin-Feng Yi, Dawn Song
To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings and we show that DRL-based agents do potentially leak privacy-sensitive information from the trained policies.
no code implementations • 22 Apr 2019 • Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou
We guide the optimization of label quality with a small amount of validation data, ensuring safe performance while maximizing the performance gain.
no code implementations • NeurIPS 2018 • Mingrui Liu, Zhe Li, Xiaoyu Wang, Jin-Feng Yi, Tianbao Yang
Negative curvature descent (NCD) method has been utilized to design deterministic or stochastic algorithms for non-convex optimization aiming at finding second-order stationary points or local minima.
2 code implementations • ICLR 2019 • Jinghui Chen, Dongruo Zhou, Jin-Feng Yi, Quanquan Gu
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box or black-box attacks.
1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock
The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoying a \emph{Random Features} (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in both the number and the length of the time series.
no code implementations • 13 Sep 2018 • Pengcheng Li, Jin-Feng Yi, Lijun Zhang
To conduct black-box attack, a popular approach aims to train a substitute model based on the information queried from the target DNN.
1 code implementation • 9 Sep 2018 • Yali Du, Meng Fang, Jin-Feng Yi, Jun Cheng, Dacheng Tao
First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model.
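The "gray color image" initialization mentioned above is simply every pixel at the midpoint of the valid range, so that no pixel starts out more salient than another (a minimal sketch; the shape and value range here are illustrative):

```python
import numpy as np

# 32x32 RGB image, pixel values in [0, 1]: all pixels at the midpoint 0.5
gray = np.full((32, 32, 3), 0.5, dtype=np.float32)
```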
no code implementations • ICLR 2019 • Zaiyi Chen, Zhuoning Yuan, Jin-Feng Yi, Bo-Wen Zhou, Enhong Chen, Tianbao Yang
For example, there is still no convergence theory for SGD and its variants that, as in practice, use a stagewise step size and return an averaged solution.
2 code implementations • ECCV 2018 • Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao
Prediction accuracy has long been the sole standard for comparing image classification models, including in the ImageNet competition.
no code implementations • 18 Jul 2018 • Adnan Siraj Rakin, Jin-Feng Yi, Boqing Gong, Deliang Fan
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
1 code implementation • 12 Jul 2018 • Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh
We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
no code implementations • 27 Jun 2018 • Yuanyu Wan, Jin-Feng Yi, Lijun Zhang
Then, for each partially observed column, we recover it by finding a vector which lies in the recovered column space and consists of the observed entries.
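The recovery step described above amounts to a least-squares fit on the observed rows against the recovered column-space basis. A hedged sketch under the assumption that the basis is orthonormal and the observed rows suffice to pin down the coefficients; `recover_column` and all variable names are illustrative, not the paper's code.

```python
import numpy as np

def recover_column(U, obs_idx, obs_vals):
    """Fill in a column lying in span(U), given its values on rows `obs_idx`."""
    # least squares on the observed rows only
    coef, *_ = np.linalg.lstsq(U[obs_idx], obs_vals, rcond=None)
    return U @ coef  # full column consistent with the observations

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(20, 3)))  # orthonormal basis of a rank-3 space
col = U @ rng.normal(size=3)                   # a column in that space
obs = np.array([0, 4, 7, 11, 15])              # only these rows are observed
full = recover_column(U, obs, col[obs])        # recovers the remaining entries
```

When the true column lies exactly in the recovered space and `U[obs]` has full column rank, the recovery is exact; otherwise the fit is a least-squares approximation.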
no code implementations • 20 Jun 2018 • Zhao Kang, Xiao Lu, Jin-Feng Yi, Zenglin Xu
There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noises and carelessly designed algorithms.
1 code implementation • 30 May 2018 • Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.
2 code implementations • NAACL 2018 • Mo Yu, Xiaoxiao Guo, Jin-Feng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, Bo-Wen Zhou
We study few-shot learning in natural language domains.
1 code implementation • 3 Mar 2018 • Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.
1 code implementation • 14 Feb 2018 • Chao Shang, Qinqing Liu, Ko-Shin Chen, Jiangwen Sun, Jin Lu, Jin-Feng Yi, Jinbo Bi
The proposed GCN model, which we call edge attention-based multi-relational GCN (EAGCN), jointly learns attention weights and node features in graph convolution.
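A generic sketch of attention-weighted multi-relational graph convolution in the spirit of the snippet: each relation (edge type) carries an attention weight, and node features are aggregated over relation-specific adjacency matrices. This is my own simplified illustration, not the paper's EAGCN layer; all names and shapes are assumptions.

```python
import numpy as np

def relational_gcn_layer(A_rel, H, W, attn):
    """One layer: attention-weighted sum over per-relation neighbor aggregation.
    A_rel: (R, n, n) adjacency per relation; H: (n, d_in) node features;
    W: (d_in, d_out) shared weights; attn: (R,) attention weights."""
    agg = sum(a * (A @ H) for a, A in zip(attn, A_rel))
    return np.maximum(agg @ W, 0)  # ReLU nonlinearity

rng = np.random.default_rng(0)
A = (rng.random((2, 5, 5)) < 0.4).astype(float)  # two relations, 5 nodes
H = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))
attn = np.array([0.7, 0.3])                      # e.g. softmax-normalized
out = relational_gcn_layer(A, H, W, attn)        # (5, 4) updated node features
```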
no code implementations • 13 Feb 2018 • Mengying Sun, Fengyi Tang, Jin-Feng Yi, Fei Wang, Jiayu Zhou
The surging availability of electronic medical records (EHR) leads to increased research interests in medical predictive modeling.
1 code implementation • ICLR 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
2 code implementations • ACL 2018 • Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh
Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.
6 code implementations • 13 Sep 2017 • Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.
no code implementations • 26 Aug 2017 • Mo Yu, Xiaoxiao Guo, Jin-Feng Yi, Shiyu Chang, Saloni Potdar, Gerald Tesauro, Haoyu Wang, Bo-Wen Zhou
We propose a new method to measure task similarities with cross-task transfer performance matrix for the deep learning scenario.
5 code implementations • 14 Aug 2017 • Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh
However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
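The zeroth-order idea behind ZOO can be illustrated in a few lines: estimate one coordinate of the loss gradient with a symmetric finite difference, using only black-box function evaluations. A minimal sketch; `zoo_coordinate_grad` and the toy loss are my own names, and the actual attack adds coordinate selection and dimension-reduction tricks on top.

```python
import numpy as np

def zoo_coordinate_grad(loss, x, i, h=1e-4):
    """Symmetric finite-difference estimate of the i-th partial derivative,
    using two queries to the black-box `loss`."""
    e = np.zeros_like(x)
    e[i] = h
    return (loss(x + e) - loss(x - e)) / (2 * h)

# toy loss with known gradient 2*x, to check the estimate
loss = lambda z: float(np.sum(z ** 2))
x = np.array([1.0, -2.0, 0.5])
g0 = zoo_coordinate_grad(loss, x, 0)  # close to 2.0
g1 = zoo_coordinate_grad(loss, x, 1)  # close to -4.0
```

Each coordinate estimate costs two queries, which is why query efficiency (e.g. via the spanning attack above) matters so much in this setting.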
no code implementations • 12 Feb 2017 • Qi Lei, Jin-Feng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon
A considerable number of clustering algorithms take instance-feature matrices as their inputs.
no code implementations • NeurIPS 2017 • Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou
When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.
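For readers unfamiliar with the quantities involved, the path-length and squared path-length (using notation standard in this line of work; the symbols are mine, not quoted from the paper) are

```latex
P_T = \sum_{t=2}^{T} \|x_t^* - x_{t-1}^*\|_2, \qquad
S_T = \sum_{t=2}^{T} \|x_t^* - x_{t-1}^*\|_2^2,
```

where $x_t^* = \arg\min_x f_t(x)$, and the claimed bound reads $\sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^*) = O(\min\{P_T, S_T\})$. When the minimizers drift slowly, $S_T$ can be much smaller than $P_T$, which is the advantage of the squared path-length.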
no code implementations • 16 May 2016 • Tianbao Yang, Lijun Zhang, Rong Jin, Jin-Feng Yi
Second, we present a lower bound under noisy gradient feedback and then show that the optimal dynamic regret can be achieved under both stochastic gradient feedback and two-point bandit feedback.
no code implementations • 3 Apr 2013 • Qi Qian, Rong Jin, Jin-Feng Yi, Lijun Zhang, Shenghuo Zhu
Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because, to keep the solution a PSD matrix, it must project the updated distance metric onto the PSD cone at every iteration, an expensive operation.
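The expensive step the snippet refers to is the projection of a symmetric matrix onto the PSD cone, which requires a full eigendecomposition (O(d^3)) and then clips the negative eigenvalues to zero. A minimal sketch of that projection; the function name is mine.

```python
import numpy as np

def project_psd(M):
    """Nearest (Frobenius-norm) PSD matrix: symmetrize, then clip
    negative eigenvalues to zero. Costs a full O(d^3) eigendecomposition."""
    sym = (M + M.T) / 2
    vals, vecs = np.linalg.eigh(sym)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T

M = np.array([[1.0, 0.0],
              [0.0, -2.0]])   # indefinite: eigenvalues 1 and -2
P = project_psd(M)            # the negative eigendirection is removed
```

Paying this cost at every SGD iteration is exactly what the paper seeks to avoid.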
no code implementations • NeurIPS 2012 • Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, Jin-Feng Yi
Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at {\it each} iteration to ensure that the obtained solution stays within the feasible domain.