Search Results for author: Yinpeng Dong

Found 77 papers, 43 papers with code

Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models

no code implementations18 Apr 2024 Shouwei Ruan, Yinpeng Dong, Hanqing Liu, Yao Huang, Hang Su, Xingxing Wei

Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and particularly demonstrated superior robustness to distribution shifts of 2D images.

Exploring the Transferability of Visual Prompting for Multimodal Large Language Models

1 code implementation17 Apr 2024 Yichi Zhang, Yinpeng Dong, Siyuan Zhang, Tianzan Min, Hang Su, Jun Zhu

To achieve this, we propose Transferable Visual Prompting (TVP), a simple and effective approach to generate visual prompts that can transfer to different models and improve their performance on downstream tasks after being trained on only one model.

Hallucination Multimodal Reasoning +2
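
For readers unfamiliar with the setting, the sketch below shows generic visual prompting: a single additive prompt is learned on one source model and then reused with other models at test time. The training loop, input size, and optimizer are illustrative assumptions; TVP's specific objectives for improving transferability are not reproduced here.

    import torch

    def train_visual_prompt(source_model, loader, epochs=5, lr=0.01):
        # Learn one additive prompt on a single source model; at test time the
        # same prompt is simply added to the inputs of a different target model.
        prompt = torch.zeros(3, 224, 224, requires_grad=True)  # assumed input size
        optimizer = torch.optim.Adam([prompt], lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(source_model(images + prompt), labels)
                loss.backward()
                optimizer.step()
        return prompt.detach()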

FaceCat: Enhancing Face Recognition Security with a Unified Generative Model Framework

no code implementations14 Apr 2024 Jiawei Chen, Xiao Yang, Yinpeng Dong, Hang Su, Jianteng Peng, Zhaoxia Yin

Motivated by the rich structural and detailed features of face generative models, we propose FaceCat, which utilizes the face generative model as a pre-trained model to improve the performance of FAS and FAD.

Face Anti-Spoofing Face Recognition +1

Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches

no code implementations31 Mar 2024 Lingxuan Wu, Xiao Yang, Yinpeng Dong, Liuwei Xie, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial patches has motivated numerous defense strategies for boosting model robustness.

Face Recognition object-detection +1

Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction

no code implementations28 Feb 2024 Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen

We evaluate DRA across various open-source and closed-source models, showcasing state-of-the-art jailbreak success rates and attack efficiency.

Reconstruction Attack

BSPA: Exploring Black-box Stealthy Prompt Attacks against Image Generators

no code implementations23 Feb 2024 Yu Tian, Xiao Yang, Yinpeng Dong, Heming Yang, Hang Su, Jun Zhu

Such image generators allow users to design specific prompts to generate realistic images through black-box APIs.

Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial Training

no code implementations12 Dec 2023 Qian Li, Yuxiao Hu, Yinpeng Dong, Dongxiao Zhang, Yuntian Chen

Adversarial training is often formulated as a min-max problem; however, concentrating only on the worst adversarial examples causes alternating, repetitive confusion of the model, i.e., previously defended or correctly classified samples are no longer defensible or accurately classifiable in subsequent adversarial training.
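
For context, the min-max formulation referred to above is the standard adversarial-training objective (notation assumed here rather than quoted from the paper):

    \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_{p} \le \epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \Big]

The inner maximization searches for the worst-case perturbation of each example, which is exactly the exclusive focus the paper argues against.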

Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning

1 code implementation5 Dec 2023 Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu

Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.

Denoising In-Context Learning
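
As a rough illustration of the transition-matrix idea, the snippet below estimates class-confusion probabilities from a model's predictions; the function name and simple counting rule are assumptions rather than the paper's estimator.

    import torch

    def estimate_transition_matrix(logits, labels, num_classes):
        # T[i, j] approximates the probability that true class i is predicted as class j.
        T = torch.zeros(num_classes, num_classes)
        preds = logits.argmax(dim=-1)
        for y, p in zip(labels.tolist(), preds.tolist()):
            T[y, p] += 1
        return T / T.sum(dim=1, keepdim=True).clamp_min(1)

The most probable noisy class for a given label can then be read off from the corresponding row of T.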

Evil Geniuses: Delving into the Safety of LLM-based Agents

1 code implementation20 Nov 2023 Yu Tian, Xiao Yang, Jingyuan Zhang, Yinpeng Dong, Hang Su

Rapid advancements in large language models (LLMs) have revitalized interest in LLM-based agents, which exhibit impressive human-like behaviors and cooperative capabilities in various scenarios.

Specificity

How Robust is Google's Bard to Adversarial Image Attacks?

1 code implementation21 Sep 2023 Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu

By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on transferability.

Adversarial Robustness Chatbot +1

Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models

1 code implementation5 Sep 2023 Haixu Song, Shiyu Huang, Yinpeng Dong, Wei-Wei Tu

The rise of deepfake images, especially of well-known personalities, poses a serious threat to the dissemination of authentic information.

DeepFake Detection Face Swapping

Root Pose Decomposition Towards Generic Non-rigid 3D Reconstruction with Monocular Videos

no code implementations ICCV 2023 Yikai Wang, Yinpeng Dong, Fuchun Sun, Xiao Yang

The key idea of our method, Root Pose Decomposition (RPD), is to maintain a per-frame root pose transformation, meanwhile building a dense field with local transformations to rectify the root pose.

3D Reconstruction Object

Improving Viewpoint Robustness for Visual Recognition via Adversarial Training

1 code implementation21 Jul 2023 Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool.

Towards Viewpoint-Invariant Visual Recognition via Adversarial Training

1 code implementation ICCV 2023 Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.

Distributional Modeling for Location-Aware Adversarial Patches

1 code implementation28 Jun 2023 Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su

In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.

Face Recognition

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

no code implementations16 Jun 2023 Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng

Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions.

DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks

no code implementations15 Jun 2023 Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei

In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.

Adversarial Defense Face Recognition +1

Robust Classification via a Single Diffusion Model

2 code implementations24 May 2023 Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu

Since our method does not require training on particular adversarial attacks, we demonstrate that it is more generalizable to defend against multiple unseen threats.

Adversarial Defense Adversarial Robustness +2
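
The underlying idea of classifying with a single class-conditional diffusion model can be summarized, in loose notation, as picking the class whose conditional denoiser best reconstructs noised versions of the input (the weighting w_t and the exact estimator are assumptions):

    \hat{y}(x) \;=\; \arg\min_{c} \; \mathbb{E}_{t,\,\epsilon}\Big[\, w_{t}\, \big\| \epsilon - \epsilon_{\theta}(x_{t},\, t,\, c) \big\|_{2}^{2} \,\Big]

Here x_t denotes the input noised to timestep t and \epsilon_{\theta} is the conditional noise predictor; since no attack-specific training is involved, the same model can be reused against unseen threats.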

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning

1 code implementation7 May 2023 Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su

To gain a better understanding of the training process and potential risks of text-to-image synthesis, we perform a systematic investigation of backdoor attacks on text-to-image diffusion models and propose BadT2I, a general multimodal backdoor attack framework that tampers with image synthesis at diverse semantic levels.

Backdoor Attack backdoor defense +2

Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving

1 code implementation CVPR 2023 Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, Shibao Zheng

However, there is still a lack of systematic understanding of the robustness of these vision-dependent BEV models, which is closely related to the safety of autonomous driving systems.

3D Object Detection Adversarial Robustness +2

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition

1 code implementation CVPR 2023 Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, Jun Zhu

The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.

Adversarial Robustness Face Recognition

Compacting Binary Neural Networks by Sparse Kernel Selection

no code implementations CVPR 2023 Yikai Wang, Wenbing Huang, Yinpeng Dong, Fuchun Sun, Anbang Yao

Binary Neural Network (BNN) represents convolution weights with 1-bit values, which enhances the efficiency of storage and computation.

Binarization

A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking

no code implementations28 Feb 2023 Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng

In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.

Adversarial Robustness Benchmarking +2

GNOT: A General Neural Operator Transformer for Operator Learning

2 code implementations28 Feb 2023 Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, Jun Zhu

However, there are several challenges for learning operators in practical applications, such as irregular meshes, multiple input functions, and the complexity of PDE solutions.

Operator learning

Improving transferability of 3D adversarial attacks with scale and shear transformations

1 code implementation2 Nov 2022 Jinlai Zhang, Yinpeng Dong, Jun Zhu, Jihong Zhu, Minchi Kuang, Xiaming Yuan

Extensive experiments show that the proposed SS attack can be seamlessly combined with existing state-of-the-art (SOTA) 3D point cloud attack methods to form more powerful attacks, and that it improves transferability by over 3.6 times compared to the baseline.
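
For intuition, a hedged sketch of a scale-and-shear transformation applied to an (N, 3) point cloud is shown below; the sampling ranges and matrix form are illustrative and not taken from the paper.

    import torch

    def random_scale_shear(points, scale_range=(0.8, 1.2), shear_max=0.15):
        # points: (N, 3) tensor; apply per-axis scaling followed by a shear.
        scale = torch.empty(3).uniform_(*scale_range)
        shear = torch.eye(3)
        shear[0, 1] = float(torch.empty(()).uniform_(-shear_max, shear_max))
        shear[0, 2] = float(torch.empty(()).uniform_(-shear_max, shear_max))
        return (points * scale) @ shear.T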

Isometric 3D Adversarial Examples in the Physical World

no code implementations27 Oct 2022 Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao

For naturalness, we constrain the adversarial example to be $\epsilon$-isometric to the original one by adopting the Gaussian curvature as a surrogate metric guaranteed by a theoretical analysis.

Pre-trained Adversarial Perturbations

1 code implementation7 Oct 2022 Yuanhao Ban, Yinpeng Dong

Self-supervised pre-training has drawn increasing attention in recent years due to its superior performance on numerous downstream tasks after fine-tuning.

GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

no code implementations9 Jun 2022 Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song

Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
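
For reference, the vanilla randomized smoothing baseline that GSmooth generalizes (from additive Gaussian noise to semantic transformations) amounts to a majority vote over noisy copies of the input; the interface below is an assumed sketch, not the paper's algorithm.

    import torch

    def smoothed_predict(model, x, num_classes, sigma=0.25, n=100):
        # x: a single input with batch dimension 1; classify n Gaussian-perturbed
        # copies and return the majority class.
        votes = torch.zeros(num_classes, dtype=torch.long)
        with torch.no_grad():
            for _ in range(n):
                pred = model(x + sigma * torch.randn_like(x)).argmax(dim=-1)
                votes[int(pred)] += 1
        return int(votes.argmax())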

Kallima: A Clean-label Framework for Textual Backdoor Attacks

no code implementations3 Jun 2022 Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu

Although deep neural networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks, research shows that deep models are extremely vulnerable to backdoor attacks.

BadDet: Backdoor Attacks on Object Detection

no code implementations28 May 2022 Shih-Han Chan, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, Jun Zhou

We propose four kinds of backdoor attacks for object detection task: 1) Object Generation Attack: a trigger can falsely generate an object of the target class; 2) Regional Misclassification Attack: a trigger can change the prediction of a surrounding object to the target class; 3) Global Misclassification Attack: a single trigger can change the predictions of all objects in an image to the target class; and 4) Object Disappearance Attack: a trigger can make the detector fail to detect the object of the target class.

Autonomous Driving Backdoor Attack +4

Query-Efficient Black-box Adversarial Attacks Guided by a Transfer-based Prior

1 code implementation13 Mar 2022 Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu

However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

no code implementations9 Mar 2022 Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.

3D Face Modelling Face Recognition

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

no code implementations13 Oct 2021 Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.

Adversarial Robustness

Improving Transferability of Adversarial Patches on Face Recognition with Generative Models

no code implementations CVPR 2021 Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu

However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns on the real-world applications of these models.

Face Recognition

Accumulative Poisoning Attacks on Real-time Data

1 code implementation NeurIPS 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.

Federated Learning

Exploring Memorization in Adversarial Training

1 code implementation ICLR 2022 Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu

In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.

Memorization

Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart

1 code implementation CVPR 2022 Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu

Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.

Vocal Bursts Valence Prediction

Automated Decision-based Adversarial Attacks

no code implementations9 May 2021 Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu

Deep learning models are vulnerable to adversarial examples, which can fool a target classifier by imposing imperceptible perturbations onto natural examples.

Adversarial Attack Program Synthesis

The art of defense: letting networks fool the attacker

1 code implementation7 Apr 2021 Jinlai Zhang, Yinpeng Dong, Binbin Liu, Bo Ouyang, Jihong Zhu, Minchi Kuang, Houqing Wang, Yanmei Meng

To this end, in this paper, we propose a novel adversarial defense for 3D point cloud classifier that makes full use of the nature of the DNNs.

Adversarial Defense

Black-box Detection of Backdoor Attacks with Limited Information and Data

no code implementations ICCV 2021 Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu

Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.

Understanding and Exploring the Network with Stochastic Architectures

1 code implementation NeurIPS 2020 Zhijie Deng, Yinpeng Dong, Shifeng Zhang, Jun Zhu

In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide a first systematic investigation of it as a stand-alone problem.

Neural Architecture Search

Bag of Tricks for Adversarial Training

2 code implementations ICLR 2021 Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu

Adversarial training (AT) is one of the most effective strategies for promoting model robustness.

Adversarial Robustness Benchmarking

BayesAdapter: Being Bayesian, Inexpensively and Robustly, via Bayesian Fine-tuning

no code implementations28 Sep 2020 Zhijie Deng, Xiao Yang, Hao Zhang, Yinpeng Dong, Jun Zhu

Despite their theoretical appeal, Bayesian neural networks (BNNs) fall far behind normal NNs in real-world adoption, mainly due to their limited scalability in training and the low fidelity of their uncertainty estimates.

Uncertainty Quantification Variational Inference

RobFR: Benchmarking Adversarial Robustness on Face Recognition

2 code implementations8 Jul 2020 Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu

Based on large-scale evaluations, commercial FR API services fail to exhibit acceptable performance under robustness evaluation; we also draw several important conclusions for understanding the adversarial robustness of FR models and provide insights for the design of robust FR models.

Adversarial Robustness Benchmarking +1

Benchmarking Adversarial Robustness on Image Classification

1 code implementation CVPR 2020 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack Adversarial Robustness +4

Towards Face Encryption by Generating Adversarial Identity Masks

1 code implementation ICCV 2021 Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue

As billions of pieces of personal data are shared through social media and networks, data privacy and security have drawn increasing attention.

Face Recognition

Boosting Adversarial Training with Hypersphere Embedding

1 code implementation NeurIPS 2020 Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su

Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.

Representation Learning

Adversarial Distributional Training for Robust Deep Learning

1 code implementation NeurIPS 2020 Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu

Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.

Benchmarking Adversarial Robustness

no code implementations26 Dec 2019 Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu

Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.

Adversarial Attack Adversarial Robustness +2

Improving Black-box Adversarial Attacks with a Transfer-based Prior

2 code implementations NeurIPS 2019 Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
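
The query-based estimator this line of work builds on is a random gradient-free (RGF) finite-difference scheme; the sketch below also shows a crude way of mixing a surrogate (transfer-based) gradient into the sampled directions. The 50/50 mixing weight is an illustrative assumption; the paper derives the weighting far more carefully.

    import torch

    def rgf_estimate(loss_fn, x, q=20, sigma=1e-4, prior=None):
        # Estimate the gradient of a black-box scalar loss at x from q random probes.
        est = torch.zeros_like(x)
        base = loss_fn(x)
        for _ in range(q):
            u = torch.randn_like(x)
            if prior is not None:
                u = 0.5 * prior / prior.norm() + 0.5 * u / u.norm()  # bias toward the surrogate gradient
            u = u / u.norm()
            est = est + (loss_fn(x + sigma * u) - base) / sigma * u
        return est / q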

Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

2 code implementations ICLR 2020 Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu

Previous work shows that adversarially robust generalization requires larger sample complexity, and a dataset such as CIFAR-10 that enables good standard accuracy may not suffice to train robust models.

Adversarial Robustness

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

no code implementations CVPR 2019 Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu

In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.

Face Recognition

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

2 code implementations CVPR 2019 Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu

In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.

Translation
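
The core trick is that attacking an ensemble of translated copies of an image can be approximated by smoothing the input gradient with a pre-defined kernel before the usual sign step; a minimal sketch with a Gaussian kernel follows (kernel size and sigma are assumed defaults).

    import torch
    import torch.nn.functional as F

    def ti_smooth_gradient(grad, kernel_size=15, sigma=3.0):
        # grad: (B, C, H, W) input gradient; convolve each channel with a Gaussian kernel.
        coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
        g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        g1d = g1d / g1d.sum()
        kernel = torch.outer(g1d, g1d).view(1, 1, kernel_size, kernel_size)
        channels = grad.shape[1]
        kernel = kernel.repeat(channels, 1, 1, 1).to(grad)
        return F.conv2d(grad, kernel, padding=kernel_size // 2, groups=channels)

The smoothed gradient then replaces the raw gradient in an iterative FGSM-style update.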

Batch Virtual Adversarial Training for Graph Convolutional Networks

no code implementations25 Feb 2019 Zhijie Deng, Yinpeng Dong, Jun Zhu

We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).

General Classification Node Classification
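
BVAT builds on the standard virtual adversarial training regularizer, which (in assumed notation) penalizes the divergence between the prediction at a node's features x and at a worst-case small perturbation of them:

    \mathcal{R}_{\mathrm{vat}}(x;\theta) = D_{\mathrm{KL}}\big( p(y \mid x; \hat{\theta}) \,\big\|\, p(y \mid x + r_{\mathrm{adv}}; \theta) \big),
    \quad\text{where}\quad r_{\mathrm{adv}} = \arg\max_{\|r\|_{2} \le \epsilon} D_{\mathrm{KL}}\big( p(y \mid x; \hat{\theta}) \,\big\|\, p(y \mid x + r; \theta) \big)

As the title suggests, the paper's focus is on applying this kind of regularizer over batches of graph nodes rather than per node.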

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples

no code implementations25 Jan 2019 Yinpeng Dong, Fan Bao, Hang Su, Jun Zhu

(3) We propose to improve the consistency of neurons on the adversarial example subset via an adversarial training algorithm with a consistent loss.

Composite Binary Decomposition Networks

no code implementations16 Nov 2018 You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu

Binary neural networks offer great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts.

General Classification Image Classification +3

Adversarial Attacks and Defences Competition

1 code implementation31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

BIG-bench Machine Learning

Boosting Adversarial Attacks with Momentum

7 code implementations CVPR 2018 Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li

To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.

Adversarial Attack
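
For concreteness, a minimal sketch of the momentum iterative update (MI-FGSM) is given below, assuming a PyTorch image classifier with (B, C, H, W) inputs in [0, 1]; the hyperparameter defaults are illustrative.

    import torch

    def mi_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
        # Accumulate a momentum term over L1-normalized gradients, take sign steps,
        # and project back into the eps L_inf ball around x.
        x_adv = x.clone().detach()
        g = torch.zeros_like(x)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            norm = grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
            g = mu * g + grad / norm
            x_adv = x_adv.detach() + alpha * g.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        return x_adv.detach()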

Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples

no code implementations18 Aug 2017 Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao

We find that: (1) the neurons in DNNs do not truly detect semantic objects/parts, but respond to objects/parts only as recurrent discriminative patches; and (2) deep visual representations are not robust distributed codes of visual concepts, because the representations of adversarial images are largely inconsistent with those of real images despite their similar visual appearance; both findings differ from previous conclusions.

Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization

1 code implementation3 Aug 2017 Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su

This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.

Quantization
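
A loose illustration of the stochastic quantization idea: only a randomly selected portion of the weights is quantized at each step, so the quantization error is introduced gradually rather than all at once. The uniform selection rule and 1-bit quantizer below are simplifications of the paper's scheme.

    import torch

    def stochastic_quantize(weights, ratio=0.5):
        # Quantize a random `ratio` fraction of the elements to 1 bit (sign * scale)
        # and keep the remaining elements at full precision.
        mask = (torch.rand_like(weights) < ratio).to(weights.dtype)
        scale = weights.abs().mean()
        quantized = scale * torch.sign(weights)
        return mask * quantized + (1.0 - mask) * weights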

Towards Robust Detection of Adversarial Examples

1 code implementation NeurIPS 2018 Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu

Although the recent progress is substantial, deep learning methods can be vulnerable to the maliciously generated adversarial examples.

Improving Interpretability of Deep Neural Networks with Semantic Information

no code implementations CVPR 2017 Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang

Interpretability of deep neural networks (DNNs) is essential since it enables users to understand the overall strengths and weaknesses of the models, conveys an understanding of how the models will behave in the future, and how to diagnose and correct potential problems.

Action Recognition Temporal Action Localization +1
