Search Results for author: Chaowei Xiao

Found 84 papers, 41 papers with code

JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks

no code implementations3 Apr 2024 Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao

With the rapid advancements in Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge.

Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models

no code implementations26 Mar 2024 Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang

Building on insights from the user study, we also developed an AI-assisted system that automates the process of jailbreak prompt generation.

AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting

1 code implementation14 Mar 2024 Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao

However, with the integration of additional modalities, MLLMs are exposed to new vulnerabilities, rendering them prone to structure-based jailbreak attacks, in which semantic content (e.g., "harmful text") is injected into images to mislead MLLMs.

Automatic and Universal Prompt Injection Attacks against Large Language Models

1 code implementation7 Mar 2024 Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao

Large Language Models (LLMs) excel in processing and generating human language, powered by their ability to interpret and follow instructions.

A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems

no code implementations28 Feb 2024 Fangzhou Wu, Ning Zhang, Somesh Jha, Patrick McDaniel, Chaowei Xiao

Large Language Model (LLM) systems are inherently compositional, with an individual LLM serving as the core foundation and additional layers of objects, such as plugins and sandboxes, built on top.

Language Modelling, Large Language Model

WIPI: A New Web Threat for LLM-Driven Web Agents

no code implementations26 Feb 2024 Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao

To evaluate the effectiveness of the proposed methodology, we conducted extensive experiments using 7 plugin-based ChatGPT Web Agents, 8 Web GPTs, and 3 different open-source Web Agents.

Mitigating Fine-tuning Jailbreak Attack with Backdoor Enhanced Alignment

no code implementations22 Feb 2024 Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Junjie Hu, Yixuan Li, Patrick McDaniel, Muhao Chen, Bo Li, Chaowei Xiao

Despite the general capabilities of Large Language Models (LLMs) like GPT-4 and Llama-2, these models still require fine-tuning or adaptation with customized data to meet specific business demands and the intricacies of tailored use cases.

T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching

1 code implementation21 Feb 2024 Zizheng Pan, Bohan Zhuang, De-An Huang, Weili Nie, Zhiding Yu, Chaowei Xiao, Jianfei Cai, Anima Anandkumar

Sampling from diffusion probabilistic models (DPMs) is often expensive for high-quality image generation and typically requires many steps with a large model.

Image Generation

A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents

1 code implementation15 Feb 2024 Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu Su, Chaowei Xiao, Huan Sun

There is a surprisingly large gap between the speed and scale of their development and deployment and our understanding of their safety risks.

Preference Poisoning Attacks on Reward Model Learning

no code implementations2 Feb 2024 Junlin Wu, Jiongxiao Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik

In addition, we observe that the simpler and more scalable rank-by-distance approaches are often competitive with the best, and on occasion significantly outperform gradient-based methods.

Instructional Fingerprinting of Large Language Models

1 code implementation21 Jan 2024 Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen

The exorbitant cost of training large language models (LLMs) from scratch makes it essential to fingerprint the models to protect intellectual property via ownership authentication and to ensure downstream users and developers comply with their license terms (e.g., restricting commercial use).

RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios

no code implementations19 Dec 2023 Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone

Simulation plays a crucial role in the development of autonomous vehicles (AVs) due to the potential risks associated with real-world testing.

Autonomous Vehicles, In-Context Learning +1

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

no code implementations7 Dec 2023 Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao

In this paper, we introduce DeceptPrompt, a novel algorithm that can generate adversarial natural language instructions that drive Code LLMs to generate functionally correct code with vulnerabilities.

Code Generation

Dolphins: Multimodal Language Model for Driving

no code implementations1 Dec 2023 Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao

The quest continues for fully autonomous vehicles (AVs) capable of navigating complex real-world scenarios with human-like understanding and responsiveness.

Autonomous Vehicles, In-Context Learning +1

On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models

no code implementations16 Nov 2023 Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao

Reinforcement Learning with Human Feedback (RLHF) is a methodology designed to align Large Language Models (LLMs) with human preferences, playing an important role in LLM alignment.

Backdoor Attack, Data Poisoning

Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking

no code implementations16 Nov 2023 Nan Xu, Fei Wang, Ben Zhou, Bang Zheng Li, Chaowei Xiao, Muhao Chen

While large language models (LLMs) have demonstrated increasing power, they have also given rise to a wide range of harmful behaviors.

Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations

no code implementations16 Nov 2023 Wenjie Mo, Jiashu Xu, Qin Liu, Jiongxiao Wang, Jun Yan, Chaowei Xiao, Muhao Chen

Existing studies in backdoor defense have predominantly focused on the training phase, overlooking the critical aspect of testing time defense.

backdoor defense

HiCL: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings

no code implementations15 Oct 2023 Zhuofeng Wu, Chaowei Xiao, VG Vinod Vydiswaran

In this paper, we propose a hierarchical contrastive learning framework, HiCL, which considers local segment-level and global sequence-level relationships to improve training efficiency and effectiveness.

Contrastive Learning, Sentence +2
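
The excerpt above only names the local/global structure of the objective; purely as a hedged illustration, the sketch below combines a segment-level and a sequence-level InfoNCE term. The encoder outputs, segmentation, and weighting are assumptions, not HiCL's actual implementation.

```python
# Rough sketch of a hierarchical contrastive objective in the spirit described
# above: an InfoNCE loss applied both to segment-level and to sequence-level
# embeddings. All shapes and the weighting are illustrative guesses.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.05):
    """Standard InfoNCE: row i of `a` should match row i of `b`."""
    logits = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

def hierarchical_contrastive_loss(seq_emb_1, seq_emb_2,
                                  seg_emb_1, seg_emb_2, alpha=0.5):
    """Combine a global (sequence-level) and a local (segment-level) term.

    seq_emb_*: (batch, dim) embeddings of two views of each sentence.
    seg_emb_*: (batch * n_segments, dim) embeddings of two views of each segment.
    """
    global_loss = info_nce(seq_emb_1, seq_emb_2)
    local_loss = info_nce(seg_emb_1, seg_emb_2)
    return alpha * global_loss + (1 - alpha) * local_loss

# Toy tensors standing in for encoder outputs.
b, s, d = 4, 3, 16
loss = hierarchical_contrastive_loss(torch.randn(b, d), torch.randn(b, d),
                                     torch.randn(b * s, d), torch.randn(b * s, d))
print(loss.item())
```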

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation

no code implementations11 Oct 2023 Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash

In this paper, we recognize that images share common features in a hierarchical way due to the inherent hierarchical structure of the classification system, which is overlooked by current data parameterization methods.

Dataset Condensation

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies

no code implementations6 Oct 2023 Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens

In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences.

CSI: Enhancing the Robustness of 3D Point Cloud Recognition against Corruption

1 code implementation5 Oct 2023 Zhuoyuan Wu, Jiachen Sun, Chaowei Xiao

In this study, we harness the inherent set property of point cloud data to introduce a novel critical subset identification (CSI) method, aiming to bolster recognition robustness in the face of data corruption.

AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models

2 code implementations3 Oct 2023 Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao

In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts?

Decision Making

Semantic Adversarial Attacks via Diffusion Models

1 code implementation14 Sep 2023 Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew Stamm, Kaidi Xu

This framework has two variants: 1) the Semantic Transformation (ST) approach, which fine-tunes the latent space of the generated image and/or the diffusion model itself; and 2) the Latent Masking (LM) approach, which masks the latent space with another target image and local backpropagation-based interpretation methods.

Adversarial Attack

Reinforcement Learning with Human Feedback for Realistic Traffic Simulation

no code implementations1 Sep 2023 Yulong Cao, Boris Ivanovic, Chaowei Xiao, Marco Pavone

This work aims to address this by developing a framework that employs reinforcement learning with human preference (RLHF) to enhance the realism of existing traffic models.

reinforcement-learning

DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing

1 code implementation28 Aug 2023 Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li

Diffusion models have been leveraged to perform adversarial purification and thus provide both empirical and certified robustness for a standard model.

Denoising
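
As a rough, non-authoritative illustration of pairing a purifier with (local) smoothing, the sketch below classifies several Gaussian-perturbed, purified copies of an input and takes a majority vote; the `purify` and `classifier` callables are hypothetical placeholders, and the certification analysis is omitted entirely.

```python
# Generic sketch: majority-vote prediction over noisy, purified copies of x.
import torch

def smoothed_predict(x, purify, classifier, n_samples=16, sigma=0.25):
    votes = []
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)     # local Gaussian perturbation
        clean = purify(noisy)                       # e.g. a diffusion-based purifier
        votes.append(classifier(clean).argmax(dim=-1))
    votes = torch.stack(votes)                      # (n_samples, batch)
    return votes.mode(dim=0).values                 # per-example majority class

# Toy usage with an identity purifier and a random linear classifier.
x = torch.randn(2, 32)
w = torch.randn(32, 10)
pred = smoothed_predict(x, purify=lambda z: z, classifier=lambda z: z @ w)
print(pred)
```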

On the Exploitability of Instruction Tuning

1 code implementation NeurIPS 2023 Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein

In this work, we investigate how an adversary can exploit instruction tuning by injecting specific instruction-following examples into the training data that intentionally change the model's behavior.

Data Poisoning, Instruction Following

Differentially Private Video Activity Recognition

no code implementations27 Jun 2023 Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar

In recent years, differential privacy has seen significant advancements in image classification; however, its application to video activity recognition remains under-explored.

Activity Recognition, Classification +2

CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception

no code implementations1 Jun 2023 Jiachen Sun, Haizhong Zheng, Qingzhao Zhang, Atul Prakash, Z. Morley Mao, Chaowei Xiao

CALICO's efficacy is substantiated by extensive evaluations on 3D object detection and BEV map segmentation tasks, where it delivers significant performance improvements.

3D Object Detection, Autonomous Driving +3

Voyager: An Open-Ended Embodied Agent with Large Language Models

1 code implementation25 May 2023 Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar

We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention.

Adversarial Demonstration Attacks on Large Language Models

no code implementations24 May 2023 Jiongxiao Wang, Zichen Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, Chaowei Xiao

We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models.

In-Context Learning

Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models

no code implementations24 May 2023 Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, Muhao Chen

We investigate security concerns of the emergent instruction tuning paradigm, in which models are trained on crowdsourced datasets with task instructions to achieve superior performance.

Continual Learning, Data Poisoning

From Shortcuts to Triggers: Backdoor Defense with Denoised PoE

1 code implementation24 May 2023 Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen

Language models are often at risk of diverse backdoor attacks, especially data poisoning.

backdoor defense, Data Poisoning +3
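
The title names a Product-of-Experts (PoE) style defense; as a loose, generic illustration of PoE-style shortcut mitigation (not necessarily the paper's Denoised PoE), the sketch below trains a main model through logits combined with a frozen "shortcut-only" expert, so the main model is discouraged from relying on trigger-like features.

```python
# Illustrative product-of-experts (PoE) training step for shortcut/trigger
# mitigation. Model definitions and the exact combination are generic
# placeholders, not the paper's method.
import torch
import torch.nn.functional as F

def poe_loss(main_logits, shortcut_logits, labels):
    """Cross-entropy on the product of experts: p ∝ p_main * p_shortcut."""
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(shortcut_logits, dim=-1)
    return F.cross_entropy(combined, labels)

# Toy example: the shortcut expert is typically frozen (or trained separately),
# so gradients only update the main model.
main_logits = torch.randn(4, 3, requires_grad=True)
shortcut_logits = torch.randn(4, 3).detach()
labels = torch.tensor([0, 2, 1, 0])
loss = poe_loss(main_logits, shortcut_logits, labels)
loss.backward()
print(loss.item(), main_logits.grad.shape)
```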

Defending against Insertion-based Textual Backdoor Attacks via Attribution

1 code implementation3 May 2023 Jiazhao Li, Zhuofeng Wu, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran

Textual backdoor attacks, a novel attack model, have been shown to be effective at adding a backdoor to a model during training.

Backdoor Attack, Language Modelling

ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger

no code implementations27 Apr 2023 Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. G. Vinod Vydiswaran, Chaowei Xiao

Textual backdoor attacks pose a practical threat to existing systems, as they can compromise the model by inserting imperceptible triggers into inputs and manipulating labels in the training dataset.

Backdoor Attack

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling

1 code implementation30 Mar 2023 Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang

In this work, we present a data poisoning attack that confounds machine learning models without any manipulation of the image or label.

Continual Learning, Data Poisoning +1

Defending against Adversarial Audio via Diffusion Model

1 code implementation2 Mar 2023 Shutong Wu, Jiongxiao Wang, Wei Ping, Weili Nie, Chaowei Xiao

In this paper, we propose an adversarial purification-based defense pipeline, AudioPure, for acoustic systems via off-the-shelf diffusion models.

PerAda: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees

no code implementations13 Feb 2023 Chulin Xie, De-An Huang, Wenda Chu, Daguang Xu, Chaowei Xiao, Bo Li, Anima Anandkumar

In this paper, we propose PerAda, a parameter-efficient pFL framework that reduces communication and computational costs and exhibits superior generalization performance, especially under test-time distribution shifts.

Generalization Bounds, Knowledge Distillation +2

Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing

1 code implementation21 Dec 2022 Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, Anima Anandkumar

Here we present a multi-modal molecule structure-text model, MoleculeSTM, by jointly learning molecules' chemical structures and textual descriptions via a contrastive learning strategy.

Contrastive Learning, Drug Discovery +2
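
As a hedged stand-in for the contrastive strategy mentioned above, the sketch below shows a generic CLIP-style symmetric InfoNCE loss between molecule-structure embeddings and text embeddings; the encoders, temperature, and batch construction are assumptions, not MoleculeSTM's actual code.

```python
# Generic symmetric contrastive loss between two embedding modalities
# (molecule structures and their textual descriptions).
import torch
import torch.nn.functional as F

def structure_text_contrastive_loss(mol_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE: matched molecule/text pairs share the same row index."""
    mol = F.normalize(mol_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = mol @ txt.T / temperature
    targets = torch.arange(mol.size(0))
    # Pull each molecule toward its own description and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy batch of 8 molecule/text embedding pairs standing in for encoder outputs.
loss = structure_text_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```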

DensePure: Understanding Diffusion Models towards Adversarial Robustness

no code implementations1 Nov 2022 Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song

By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process.

Adversarial Robustness, Denoising

Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models

2 code implementations15 Sep 2022 Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, Chaowei Xiao

In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.

Image Classification, Zero-shot Generalization
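
A minimal sketch of the test-time prompt tuning idea, under assumed interfaces: a small prompt tensor is adapted on a single test example by minimizing the entropy of predictions averaged over augmented views. The `model(view, prompt)` signature and the toy "augmentations" are hypothetical, not the actual TPT/CLIP pipeline.

```python
# Test-time adaptation of a prompt tensor via entropy minimization over views.
import torch
import torch.nn.functional as F

def entropy(probs, eps=1e-8):
    return -(probs * (probs + eps).log()).sum(dim=-1).mean()

def tune_prompt(model, views, prompt, steps=3, lr=5e-3):
    """`model(view, prompt)` -> class logits; `views` is a list of augmented copies."""
    prompt = prompt.clone().requires_grad_(True)
    opt = torch.optim.AdamW([prompt], lr=lr)
    for _ in range(steps):
        probs = torch.stack([F.softmax(model(v, prompt), dim=-1) for v in views])
        loss = entropy(probs.mean(dim=0))   # prefer confident, consistent predictions
        opt.zero_grad()
        loss.backward()
        opt.step()
    return prompt.detach()

# Toy usage: a random linear "model" over concatenated image features and prompt.
w = torch.randn(32 + 16, 10)
model = lambda v, p: torch.cat([v, p]) @ w
views = [torch.randn(32) for _ in range(4)]
tuned = tune_prompt(model, views, prompt=torch.zeros(16))
print(tuned.shape)
```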

Retrieval-based Controllable Molecule Generation

1 code implementation23 Aug 2022 Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard Baraniuk, Anima Anandkumar

On various tasks ranging from simple design criteria to a challenging real-world scenario for designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate our approach extrapolates well beyond the retrieval database, and achieves better performance and wider applicability than previous methods.

Drug Discovery, Retrieval

PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D Point Cloud Recognition

no code implementations21 Aug 2022 Jiachen Sun, Weili Nie, Zhiding Yu, Z. Morley Mao, Chaowei Xiao

3D point clouds are becoming a critical data representation in many real-world applications such as autonomous driving, robotics, and medical imaging.

Autonomous Driving

Robust Trajectory Prediction against Adversarial Attacks

no code implementations29 Jul 2022 Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone

We demonstrate that our method is able to improve the performance by 46% on adversarial data and at the cost of only 3% performance degradation on clean data, compared to the model trained with clean data.

Autonomous Driving, Data Augmentation +1
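
As a generic, hedged illustration of adversarial training for a trajectory-style regressor (not the paper's specific method or the numbers quoted above), the sketch below perturbs the observed history within an L-infinity budget to maximize prediction error and then trains on the perturbed input.

```python
# Generic PGD-based adversarial training step for a regression model on
# continuous inputs (e.g. past trajectories). Model, loss, and budget are
# illustrative assumptions.
import torch

def pgd_perturb(model, hist, future, eps=0.1, alpha=0.02, steps=5):
    delta = torch.zeros_like(hist, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(hist + delta), future)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the prediction error
            delta.clamp_(-eps, eps)              # stay within the L-inf budget
        delta.grad.zero_()
    return (hist + delta).detach()

def adversarial_training_step(model, opt, hist, future):
    adv_hist = pgd_perturb(model, hist, future)
    loss = torch.nn.functional.mse_loss(model(adv_hist), future)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: a linear predictor mapping 10 past 2D points to 5 future 2D points.
model = torch.nn.Linear(20, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
hist, future = torch.randn(8, 20), torch.randn(8, 10)
print(adversarial_training_step(model, opt, hist, future))
```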

SecretGen: Privacy Recovery on Pre-Trained Models via Distribution Discrimination

1 code implementation25 Jul 2022 Zhuowen Yuan, Fan Wu, Yunhui Long, Chaowei Xiao, Bo Li

We first explore different statistical information which can discriminate the private training distribution from other distributions.

Model Selection, Transfer Learning

Diffusion Models for Adversarial Purification

2 code implementations16 May 2022 Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar

Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
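
A conceptual sketch of the purification recipe described above, under simplifying assumptions: diffuse the possibly-adversarial input forward to a moderate noise level, then run a deterministic (DDIM-style) reverse process back to a clean sample before classification. The noise schedule and `eps_model` are placeholders, not the paper's models or sampler.

```python
# Sketch of diffusion-based adversarial purification: forward-diffuse, then denoise.
import torch

def purify(x_adv, eps_model, n_steps=100, t_star=30):
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas_cum = torch.cumprod(1.0 - betas, dim=0)

    # Forward diffusion to t*: x_t = sqrt(a_t) * x + sqrt(1 - a_t) * noise
    a_t = alphas_cum[t_star]
    x = a_t.sqrt() * x_adv + (1 - a_t).sqrt() * torch.randn_like(x_adv)

    # Deterministic reverse process back to a clean sample.
    for i in reversed(range(t_star + 1)):
        t = torch.full((x.size(0),), i, dtype=torch.long)
        eps = eps_model(x, t)                               # predicted noise
        a_i = alphas_cum[i]
        a_prev = alphas_cum[i - 1] if i > 0 else torch.tensor(1.0)
        x0 = (x - (1 - a_i).sqrt() * eps) / a_i.sqrt()      # predicted clean sample
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # step toward t-1
    return x

# Toy usage with a dummy noise predictor standing in for a trained diffusion model.
purified = purify(torch.randn(2, 3, 8, 8), eps_model=lambda x, t: torch.zeros_like(x))
print(purified.shape)
```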

Understanding The Robustness in Vision Transformers

2 code implementations26 Apr 2022 Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Anima Anandkumar, Jiashi Feng, Jose M. Alvarez

Our study is motivated by the intriguing properties of the emerging visual grouping in Vision Transformers, which indicates that self-attention may promote robustness through improved mid-level representations.

Ranked #4 on Domain Generalization on ImageNet-R (using extra training data)

Domain Generalization, Image Classification +3

RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning

1 code implementation ICLR 2022 Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Song-Chun Zhu, Anima Anandkumar

This task remains challenging for current deep learning algorithms since it requires addressing three key technical problems jointly: 1) identifying object entities and their properties, 2) inferring semantic relations between pairs of entities, and 3) generalizing to novel object-relation combinations, i.e., systematic generalization.

Human-Object Interaction Detection, Object +5

Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions

no code implementations NeurIPS 2021 Jiachen Sun, Yulong Cao, Christopher B. Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao

In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds with adversarial training.

Adversarial Robustness, Autonomous Driving +1

Auditing AI models for Verified Deployment under Semantic Specifications

no code implementations25 Sep 2021 Homanga Bharadhwaj, De-An Huang, Chaowei Xiao, Anima Anandkumar, Animesh Garg

We enable such unit tests through variations in a semantically-interpretable latent space of a generative model.

Face Recognition

Long-Short Transformer: Efficient Transformers for Language and Vision

3 code implementations NeurIPS 2021 Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro

For instance, Transformer-LS achieves 0.97 test BPC on enwik8 using half the number of parameters of the previous method, while being faster and able to handle 3x longer sequences than its full-attention version on the same hardware.

Language Modelling

Taxonomy of Machine Learning Safety: A Survey and Primer

no code implementations9 Jun 2021 Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa

The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities such as interpretability, verifiability, and performance limitations.

Autonomous Vehicles, BIG-bench Machine Learning +1

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have demonstrated this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL under this setting have had limited success and lack theoretical principles.

Reinforcement Learning (RL)

AdvIT: Adversarial Frames Identifier Based on Temporal Consistency in Videos

no code implementations ICCV 2019 Chaowei Xiao, Ruizhi Deng, Bo Li, Taesung Lee, Benjamin Edwards, Jinfeng Yi, Dawn Song, Mingyan Liu, Ian Molloy

In particular, we apply optical flow estimation to the target and previous frames to generate pseudo frames and evaluate the consistency of the learner's output between these pseudo frames and the target frame.

Action Recognition, Autonomous Driving +7
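
A hedged sketch of the temporal-consistency check described above: estimate optical flow between the target and previous frames, warp the previous frame into a pseudo frame, and compare the model's outputs on the two. The Farneback parameters, warping, and scoring are generic choices, not AdvIT's exact procedure.

```python
# Generic pseudo-frame generation via optical flow and an output-consistency score.
import cv2
import numpy as np

def pseudo_frame(prev_frame, target_frame):
    """Backward-warp prev_frame toward target_frame using Farneback optical flow."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    targ_gray = cv2.cvtColor(target_frame, cv2.COLOR_BGR2GRAY)
    # Flow from target to previous: target(y, x) ~ prev(y + fy, x + fx)
    flow = cv2.calcOpticalFlowFarneback(targ_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = targ_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_frame, map_x, map_y, cv2.INTER_LINEAR)

def consistency_score(model, prev_frame, target_frame):
    """Lower score = outputs on target and pseudo frame agree (likely benign)."""
    pseudo = pseudo_frame(prev_frame, target_frame)
    return float(np.abs(model(target_frame) - model(pseudo)).mean())

# Toy usage with random frames and a trivial "model" (mean intensity per channel).
prev = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
targ = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
print(consistency_score(lambda f: f.mean(axis=(0, 1)), prev, targ))
```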

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

no code implementations16 Jul 2019 Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.

Autonomous Driving, BIG-bench Machine Learning +2

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

no code implementations11 Jul 2019 Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li

Deep neural networks (DNNs) are found to be vulnerable against adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions.

Autonomous Driving

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

1 code implementation19 Jun 2019 Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li

In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate "unrestricted adversarial examples".

Attribute, Face Recognition +1

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.
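
As a partial illustration, the sketch below implements only the fast forward interval bound propagation (IBP) half mentioned above, for a small ReLU network; the tight CROWN backward bounding pass and the certified-training loop are omitted.

```python
# Minimal interval bound propagation (IBP) for a ReLU MLP under an L-inf perturbation.
import torch

def ibp_linear(lower, upper, weight, bias):
    """Interval arithmetic through y = x @ W^T + b."""
    center, radius = (upper + lower) / 2, (upper - lower) / 2
    new_center = center @ weight.T + bias
    new_radius = radius @ weight.abs().T
    return new_center - new_radius, new_center + new_radius

def ibp_bounds(layers, x, eps):
    """Bounds on the logits for all inputs within ||x' - x||_inf <= eps."""
    lower, upper = x - eps, x + eps
    for i, layer in enumerate(layers):
        lower, upper = ibp_linear(lower, upper, layer.weight, layer.bias)
        if i < len(layers) - 1:                      # ReLU on hidden layers
            lower, upper = lower.clamp(min=0), upper.clamp(min=0)
    return lower, upper

# Toy 2-layer network: the prediction is certified if the true class's lower
# bound exceeds every other class's upper bound.
layers = [torch.nn.Linear(4, 8), torch.nn.Linear(8, 3)]
x = torch.randn(1, 4)
lb, ub = ibp_bounds(layers, x, eps=0.05)
print(lb, ub)
```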

Application-driven Privacy-preserving Data Publishing with Correlated Attributes

no code implementations26 Dec 2018 Aria Rezaei, Chaowei Xiao, Jie Gao, Bo Li, Sirajum Munir

To address the privacy concerns of users in this environment, we propose a novel framework called PR-GAN that offers a privacy-preserving mechanism using generative adversarial networks.

Privacy Preserving

Data Poisoning Attack against Unsupervised Node Embedding Methods

no code implementations30 Oct 2018 Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song

In this paper, we take the task of link prediction as an example, which is one of the most fundamental problems for graph analysis, and introduce a data poisoning attack against node embedding methods.

Data Poisoning, Link Prediction

MeshAdv: Adversarial Meshes for Visual Recognition

no code implementations CVPR 2019 Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu

Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications.

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations CVPR 2018 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.

Classification, General Classification

Performing Co-Membership Attacks Against Deep Generative Models

no code implementations24 May 2018 Kin Sum Liu, Chaowei Xiao, Bo Li, Jie Gao

We conduct extensive experiments on a variety of datasets and generative models showing that: our attacker network outperforms prior membership attacks; co-membership attacks can be substantially more powerful than single attacks; and VAEs are more susceptible to membership attacks compared to GANs.

Spatially Transformed Adversarial Examples

3 code implementations ICLR 2018 Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song

Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
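
A simplified, non-authoritative sketch of a spatial-transformation attack in this spirit: optimize a per-pixel flow field so the warped (rather than additively perturbed) image is misclassified, while a smoothness penalty keeps the warp gentle. The optimizer, loss weights, and dummy classifier are illustrative choices, not the paper's exact formulation.

```python
# Flow-field-based spatial transformation attack sketch using grid_sample.
import torch
import torch.nn.functional as F

def spatial_attack(model, x, label, steps=50, lr=0.01, tau=0.05):
    n, _, h, w = x.shape
    # Base sampling grid in [-1, 1] as expected by grid_sample (x, y ordering).
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)

    flow = torch.zeros(n, h, w, 2, requires_grad=True)   # learned displacement field
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        warped = F.grid_sample(x, base_grid + flow, align_corners=True)
        logits = model(warped)
        # Encourage misclassification while keeping the flow locally smooth.
        attack_loss = -F.cross_entropy(logits, label)
        smooth_loss = ((flow[:, 1:] - flow[:, :-1]).abs().mean() +
                       (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean())
        loss = attack_loss + tau * smooth_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.grid_sample(x, base_grid + flow.detach(), align_corners=True)

# Toy usage with a random convolutional "classifier".
model = torch.nn.Sequential(torch.nn.Conv2d(3, 4, 3, padding=1),
                            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                            torch.nn.Linear(4, 10))
x_adv = spatial_attack(model, torch.rand(1, 3, 16, 16), torch.tensor([3]))
print(x_adv.shape)
```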

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features

no code implementations28 Aug 2017 Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik

A conventional approach to evaluate ML robustness to such attacks, as well as to design robust ML, is by considering simplified feature-space models of attacks, where the attacker changes ML features directly to effect evasion, while minimizing or constraining the magnitude of this change.

Intrusion Detection, Malware Detection

Robust Physical-World Attacks on Deep Learning Models

1 code implementation27 Jul 2017 Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
