Search Results for author: Jin-Feng Yi

Found 41 papers, 19 papers with code

Provably Robust Metric Learning

2 code implementations NeurIPS 2020 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Yuan Jiang, Cho-Jui Hsieh

Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.

Metric Learning

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

1 code implementation 11 May 2020 Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang

By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
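The subspace idea above lends itself to a short sketch. This minimal NumPy example (dimensions and auxiliary data are synthetic, not from the paper) projects a candidate perturbation onto the span of an unlabeled auxiliary set, which is the step that shrinks the attack's search space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical auxiliary unlabeled dataset: 20 samples in a 100-dim input space.
aux = rng.standard_normal((100, 20))

# Orthonormal basis for the subspace spanned by the auxiliary samples.
basis, _ = np.linalg.qr(aux)  # shape (100, 20)

# A candidate perturbation proposed by some black-box attack.
delta = rng.standard_normal(100)

# Project it onto the low-dimensional subspace: the attack now searches
# over 20 coefficients instead of 100 input dimensions.
delta_low = basis @ (basis.T @ delta)
```

Any query-based attack can then operate on the 20 coefficients, which is the source of the query-efficiency gain the abstract describes.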

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

1 code implementation ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Potential Passenger Flow Prediction: A Novel Study for Urban Transportation Development

no code implementations 7 Dec 2019 Yongshun Gong, Zhibin Li, Jian Zhang, Wei Liu, Jin-Feng Yi

In this paper, this specific problem is termed as potential passenger flow (PPF) prediction, which is a novel and important study connected with urban computing and intelligent transportation systems.

Multi-view Learning Recommendation Systems

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations ICCV 2019 Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).

Learning with noisy labels

Sample Adaptive Multiple Kernel Learning for Failure Prediction of Railway Points

no code implementations 2 Jul 2019 Zhibin Li, Jian Zhang, Qiang Wu, Yongshun Gong, Jin-Feng Yi, Christina Kirsch

In this paper, we formulate our prediction task as a multiple kernel learning problem with missing kernels.

Defending Against Adversarial Attacks Using Random Forests

no code implementations 16 Jun 2019 Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong

As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective

1 code implementation 10 Jun 2019 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh

Furthermore, we show that dual solutions for these QP problems give a valid lower bound on the adversarial perturbation that can be used for formal robustness verification, providing a unified view of attack and verification for NN models.


Joint Semantic Domain Alignment and Target Classifier Learning for Unsupervised Domain Adaptation

no code implementations 10 Jun 2019 Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou

Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.

Unsupervised Domain Adaptation

Inferring the Importance of Product Appearance: A Step Towards the Screenless Revolution

no code implementations 1 May 2019 Yongshun Gong, Jin-Feng Yi, Dong-Dong Chen, Jian Zhang, Jiayu Zhou, Zhihua Zhou

In this paper, we aim to infer the significance of every item's appearance in consumer decision making and identify the group of items that are suitable for screenless shopping.

Decision Making

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning

no code implementations24 Apr 2019 Xinlei Pan, Wei-Yao Wang, Xiaoshuai Zhang, Bo Li, Jin-Feng Yi, Dawn Song

To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings, and we show that DRL-based agents can indeed leak privacy-sensitive information from the trained policies.

Autonomous Driving Continuous Control +3

Reliable Weakly Supervised Learning: Maximize Gain and Maintain Safeness

no code implementations22 Apr 2019 Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou

We guide the optimization of label quality through a small amount of validation data, ensuring safe performance while maximizing the performance gain.

Weakly-supervised Learning

Adaptive Negative Curvature Descent with Applications in Non-convex Optimization

no code implementations NeurIPS 2018 Mingrui Liu, Zhe Li, Xiaoyu Wang, Jin-Feng Yi, Tianbao Yang

Negative curvature descent (NCD) method has been utilized to design deterministic or stochastic algorithms for non-convex optimization aiming at finding second-order stationary points or local minima.

A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks

2 code implementations ICLR 2019 Jinghui Chen, Dongruo Zhou, Jin-Feng Yi, Quanquan Gu

Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks or black-box attacks.

Adversarial Attack

Random Warping Series: A Random Features Method for Time-Series Embedding

1 code implementation 14 Sep 2018 Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock

The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoys a \emph{Random Features} (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in terms of both the number and the length of time-series.

Clustering Dynamic Time Warping +2

Query-Efficient Black-Box Attack by Active Learning

no code implementations 13 Sep 2018 Pengcheng Li, Jin-Feng Yi, Lijun Zhang

To conduct black-box attack, a popular approach aims to train a substitute model based on the information queried from the target DNN.

Active Learning Adversarial Attack

Towards Query Efficient Black-box Attacks: An Input-free Perspective

1 code implementation 9 Sep 2018 Yali Du, Meng Fang, Jin-Feng Yi, Jun Cheng, Dacheng Tao

First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model.

Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions

no code implementations ICLR 2019 Zaiyi Chen, Zhuoning Yuan, Jin-Feng Yi, Bo-Wen Zhou, Enhong Chen, Tianbao Yang

For example, there is still a lack of convergence theory for SGD and its variants that use a stagewise step size and return an averaged solution in practice.

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations ECCV 2018 Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao

Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition.

General Classification Image Classification

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

Matrix Completion from Non-Uniformly Sampled Entries

no code implementations 27 Jun 2018 Yuanyu Wan, Jin-Feng Yi, Lijun Zhang

Then, for each partially observed column, we recover it by finding a vector which lies in the recovered column space and consists of the observed entries.

Matrix Completion
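The column-recovery step in the abstract can be illustrated with a toy least-squares fit: given an already-recovered column space and a partially observed column, fit coefficients on the observed entries alone and reconstruct the missing ones. Sizes and data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: a 6-entry column lying in a known rank-2 column space.
basis = rng.standard_normal((6, 2))   # recovered column space
coef_true = np.array([1.5, -0.5])
column = basis @ coef_true            # full column we would like to recover

observed = np.array([0, 2, 5])        # indices of the observed entries

# Fit coefficients using only the observed rows, then reconstruct the
# whole column from the column space.
coef, *_ = np.linalg.lstsq(basis[observed], column[observed], rcond=None)
recovered = basis @ coef

print(np.allclose(recovered, column))  # True: 3 observations suffice for rank 2
```

With at least as many observed entries as the rank (generically), the least-squares system determines the coefficients exactly, so the unobserved entries are recovered as well.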

Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification

no code implementations 20 Jun 2018 Zhao Kang, Xiao Lu, Jin-Feng Yi, Zenglin Xu

There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noises and carelessly designed algorithms.

Clustering General Classification

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.

Image Classification Machine Translation +2

Edge Attention-based Multi-Relational Graph Convolutional Networks

1 code implementation 14 Feb 2018 Chao Shang, Qinqing Liu, Ko-Shin Chen, Jiangwen Sun, Jin Lu, Jin-Feng Yi, Jinbo Bi

The proposed GCN model, which we call edge attention-based multi-relational GCN (EAGCN), jointly learns attention weights and node features in graph convolution.

Attribute

Identify Susceptible Locations in Medical Records via Adversarial Attacks on Deep Predictive Models

no code implementations 13 Feb 2018 Mengying Sun, Fengyi Tang, Jin-Feng Yi, Fei Wang, Jiayu Zhou

The surging availability of electronic health records (EHR) leads to increased research interest in medical predictive modeling.

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Caption Generation Image Captioning

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

6 code implementations 13 Sep 2017 Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.

Adversarial Attack Adversarial Robustness
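The elastic-net regularization in EAD brings in an L1 proximal step. As an illustration of that one ingredient only (not the full attack), the soft-thresholding operator used by ISTA-style updates:

```python
import numpy as np

def soft_threshold(z, beta):
    """Elementwise soft-thresholding: the proximal operator of beta*||z||_1.
    ISTA-style iterations apply this after each gradient step to promote
    sparse (elastic-net regularized) perturbations."""
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

print(soft_threshold(np.array([0.5, -0.05, 2.0]), 0.1))  # [0.4, 0., 1.9]
```

Entries with magnitude below the threshold are zeroed out, which is what drives L1-sparse adversarial perturbations.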

Robust Task Clustering for Deep Many-Task Learning

no code implementations 26 Aug 2017 Mo Yu, Xiaoxiao Guo, Jin-Feng Yi, Shiyu Chang, Saloni Potdar, Gerald Tesauro, Haoyu Wang, Bo-Wen Zhou

We propose a new method to measure task similarities with cross-task transfer performance matrix for the deep learning scenario.

Clustering Few-Shot Learning +7

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations 14 Aug 2017 Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.

Adversarial Attack Adversarial Defense +3
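The core primitive behind ZOO is estimating gradients from function queries alone. A minimal coordinate-wise symmetric-difference sketch; the toy loss below stands in for the black-box model:

```python
import numpy as np

def zoo_gradient(f, x, h=1e-4):
    """Coordinate-wise symmetric-difference gradient estimate: the
    zeroth-order primitive behind ZOO-style attacks. Only function
    queries are used; no backpropagation through the model."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

# Toy "black-box" loss: we can query it but not differentiate it analytically.
f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
print(zoo_gradient(f, x))  # close to the true gradient 2*x = [2, -4, 1]
```

The paper's actual method adds coordinate sampling and attack-specific machinery on top of this estimate, so treat the above as the bare finite-difference idea only.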

Improved Dynamic Regret for Non-degenerate Functions

no code implementations NeurIPS 2017 Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou

When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.
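In the standard notation for this line of work (the symbols below follow common usage and are an assumption, not quoted from the paper), with a comparator sequence $u_1, \dots, u_T$ the two quantities in the abstract read:

```latex
P_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|,
\qquad
S_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|^2,
\qquad
\text{Regret}_T = O\!\bigl(\min(P_T,\, S_T)\bigr).
```

When the comparators drift slowly, $S_T$ can be much smaller than $P_T$, which is why taking the minimum of the two is an improvement.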

Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient

no code implementations 16 May 2016 Tianbao Yang, Lijun Zhang, Rong Jin, Jin-Feng Yi

Secondly, we present a lower bound with noisy gradient feedback and then show that we can achieve optimal dynamic regrets under a stochastic gradient feedback and two-point bandit feedback.

Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD)

no code implementations 3 Apr 2013 Qi Qian, Rong Jin, Jin-Feng Yi, Lijun Zhang, Shenghuo Zhu

Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because, to ensure that the solution is a PSD matrix, it must project the updated distance metric onto the PSD cone at every iteration, an expensive operation.

Computational Efficiency Metric Learning
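The PSD-cone projection the abstract identifies as the bottleneck is typically computed by a full eigendecomposition with the negative eigenvalues clipped to zero, which is what makes it expensive per iteration. A minimal NumPy sketch:

```python
import numpy as np

def project_psd(m):
    """Project a symmetric matrix onto the PSD cone by eigendecomposition,
    clipping negative eigenvalues to zero. Cost is O(d^3) per call, which
    is the per-iteration expense the abstract refers to."""
    eigvals, eigvecs = np.linalg.eigh(m)
    return eigvecs @ np.diag(np.clip(eigvals, 0.0, None)) @ eigvecs.T

m = np.array([[2.0, 0.0],
              [0.0, -1.0]])
print(project_psd(m))  # [[2., 0.], [0., 0.]]
```

Avoiding this projection at every step (e.g. via adaptive sampling, as the paper proposes) is the source of the efficiency gain.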

Stochastic Gradient Descent with Only One Projection

no code implementations NeurIPS 2012 Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, Jin-Feng Yi

Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at {\it each} iteration to ensure that the obtained solution stays within the feasible domain.
