Search Results for author: Yuxin Wen

Found 24 papers, 15 papers with code

Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization

no code implementations • 28 Mar 2024 • Yuhang Li, Xin Dong, Chen Chen, Jingtao Li, Yuxin Wen, Michael Spranger, Lingjuan Lyu

Synthetic image data generation represents a promising avenue for training deep learning models, particularly in the realm of transfer learning, where obtaining real images within a specific domain can be prohibitively expensive due to privacy and intellectual property considerations.

Transfer Learning

Coercing LLMs to do and reveal (almost) anything

1 code implementation • 21 Feb 2024 • Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein

It has recently been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements.

Benchmarking the Robustness of Image Watermarks

1 code implementation • 16 Jan 2024 • Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, ChengHao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang

We present WAVES (Watermark Analysis Via Enhanced Stress-testing), a novel benchmark for assessing watermark robustness, overcoming the limitations of current evaluation methods. WAVES integrates detection and identification tasks, and establishes a standardized evaluation protocol comprising a diverse range of stress tests.

Benchmarking

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

1 code implementation • 1 Sep 2023 • Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-Yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.

Seeing in Words: Learning to Classify through Language Bottlenecks

no code implementations • 29 Jun 2023 • Khalid Saifullah, Yuxin Wen, Jonas Geiping, Micah Goldblum, Tom Goldstein

Neural networks for computer vision extract uninterpretable features despite achieving high accuracy on benchmarks.

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models

1 code implementation • 23 Jun 2023 • Neel Jain, Khalid Saifullah, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein

With the rise of Large Language Models (LLMs) and their ubiquitous deployment in diverse domains, measuring language model behavior on realistic data is imperative.

Chatbot · Language Modelling

On the Reliability of Watermarks for Large Language Models

1 code implementation • 7 Jun 2023 • John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

We also consider a range of new detection schemes that are sensitive to short spans of watermarked text embedded inside a large document, and we compare the robustness of watermarking to other kinds of detectors.
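This short-span sensitivity can be illustrated with a windowed scan: slide a fixed-size window over the document and report the maximum detection z-score, so a brief watermarked span is not diluted by the surrounding unwatermarked text. A minimal sketch follows, assuming a detector that already labels each token green or red; the window size and green fraction `gamma` are illustrative placeholders, not the paper's exact scheme.

```python
import math

def windowed_z(green_flags: list[bool], gamma: float = 0.5, window: int = 50) -> float:
    """Max one-proportion z-score over all length-`window` spans of green/red indicators."""
    best = float("-inf")
    for i in range(max(1, len(green_flags) - window + 1)):
        span = green_flags[i:i + window]
        n = len(span)
        z = (sum(span) - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
        best = max(best, z)
    return best

flags = [True] * 30 + [False] * 200  # short watermarked span inside a long document
print(f"max windowed z = {windowed_z(flags, window=30):.2f}")  # ~5.5: the all-green window dominates
```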

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

2 code implementations • NeurIPS 2023 • Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein

In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge of how to prompt the model.
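The discrete optimization at the heart of this approach can be sketched with a straight-through projection: keep a continuous "soft" prompt, snap each token to its nearest vocabulary embedding on the forward pass, and let gradients flow back into the soft prompt. The embedding table, target feature, and sizes below are random stand-ins for illustration (the real method optimizes against a model such as CLIP), not the authors' released code.

```python
import torch

vocab_size, dim, prompt_len = 1000, 64, 8
emb_table = torch.randn(vocab_size, dim)  # stand-in for a model's token embedding matrix
target = torch.randn(dim)                 # stand-in for a target feature, e.g. an image embedding

soft_prompt = torch.randn(prompt_len, dim, requires_grad=True)
opt = torch.optim.Adam([soft_prompt], lr=0.1)

for step in range(200):
    # Project every soft token onto its nearest real token embedding.
    ids = torch.cdist(soft_prompt, emb_table).argmin(dim=1)
    hard = emb_table[ids]
    # Straight-through estimator: the forward pass sees the hard (discrete)
    # embeddings, but the gradient flows into soft_prompt.
    proj = soft_prompt + (hard - soft_prompt).detach()
    loss = 1 - torch.nn.functional.cosine_similarity(proj.mean(0), target, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("discovered token ids:", ids.tolist())  # decode with the model's tokenizer in practice
```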

A Watermark for Large Language Models

7 code implementations • 24 Jan 2023 • John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens.

Language Modelling
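The "algorithmically detectable" signal can be made concrete with a toy green-list detector: a pseudorandom function of the previous token colors a fraction gamma of the vocabulary green, watermarked generation over-selects green tokens, and a one-proportion z-test exposes the bias. The hash-based coloring and whitespace tokenization below are simplified placeholders, not the paper's released implementation.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary colored "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly color `token` green with probability GAMMA, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GAMMA

def z_score(tokens: list[str]) -> float:
    """One-proportion z-test: how far the green-token count exceeds the GAMMA baseline."""
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

tokens = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {z_score(tokens):.2f}")  # a watermarked generator pushes z well above ~4
```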

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries

1 code implementation • 19 Oct 2022 • Yuxin Wen, Arpit Bansal, Hamid Kazemi, Eitan Borgnia, Micah Goldblum, Jonas Geiping, Tom Goldstein

As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training data back to their rightful owners.

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

1 code implementation • 17 Oct 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein

Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates.

Federated Learning · Image Classification · +2

Surface Reconstruction from Point Clouds: A Survey and a Benchmark

no code implementations • 5 May 2022 • Zhangjin Huang, Yuxin Wen, ZiHao Wang, Jinjuan Ren, Kui Jia

For example, while deep learning methods are increasingly popular, our systematic studies suggest that, surprisingly, a few classical methods perform even better in terms of both robustness and generalization; our studies also suggest that the practical challenges of misaligned point sets from multi-view scanning, missing surface points, and point outliers remain unsolved by all existing surface reconstruction methods.

Benchmarking · Surface Reconstruction

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

1 code implementation • 1 Feb 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein

Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.

Federated Learning

Machine Learning based Medical Image Deepfake Detection: A Comparative Study

no code implementations • 27 Sep 2021 • Siddharth Solaiyappan, Yuxin Wen

Deep generative networks in recent years have reinforced the need for caution while consuming various modalities of digital information.

BIG-bench Machine Learning · DeepFake Detection · +1

Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds

no code implementations • CVPR 2021 • Wenbin Zhao, Jiabao Lei, Yuxin Wen, JianGuo Zhang, Kui Jia

Motivated by the universal phenomenon that self-similar shape patterns of local surface patches repeat across the entire surface of an object, we aim to push data-driven strategies forward and propose to learn a local implicit surface network for shared, adaptive modeling of the entire surface, enabling direct surface reconstruction from a raw point cloud; we also enhance the leveraging of surface self-similarities by improving correlations among the optimized latent codes of individual surface patches.

Surface Reconstruction
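A latent-conditioned implicit network of the kind described above can be sketched in a few lines: a shared MLP maps a query point together with a per-patch latent code to a signed distance, and reconstruction optimizes (and correlates) the patch codes. Layer sizes and the interface are illustrative assumptions, not the paper's architecture.

```python
import torch

class LocalImplicitNet(torch.nn.Module):
    """Shared MLP: (query point, patch latent code) -> signed distance."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + latent_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )

    def forward(self, xyz: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3) query points; code: (B, latent_dim) latents of the patches they fall in
        return self.mlp(torch.cat([xyz, code], dim=-1)).squeeze(-1)

net = LocalImplicitNet()
sdf = net(torch.randn(16, 3), torch.randn(16, 64))  # (16,) signed distances
```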

Deep Optimized Priors for 3D Shape Modeling and Reconstruction

no code implementations • CVPR 2021 • Mingyue Yang, Yuxin Wen, Weikai Chen, Yongwei Chen, Kui Jia

Many learning-based approaches have difficulty scaling to unseen data, as the generality of their learned priors is limited to the scale and variations of the training samples.

3D Shape Modeling

Towards Understanding the Regularization of Adversarial Robustness on Neural Networks

no code implementations • ICML 2020 • Yuxin Wen, Shuai Li, Kui Jia

However, such methods are observed to cause standard performance degradation, i.e., degradation on natural examples.

Adversarial Robustness

Geometry-Aware Generation of Adversarial Point Clouds

2 code implementations • 24 Dec 2019 • Yuxin Wen, Jiehong Lin, Ke Chen, C. L. Philip Chen, Kui Jia

Regularizing the targeted attack loss with our geometry-aware objectives yields our proposed method, Geometry-Aware Adversarial Attack ($GeoA^3$).

Adversarial Attack · Fairness
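The shape of such a geometry-regularized attack objective can be sketched as a targeted classification loss plus a geometry term. The Chamfer distance below stands in for the paper's geometry-aware objectives, and the toy classifier is an assumed placeholder.

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a and b of shape (N, 3)."""
    d = torch.cdist(a, b)  # (N, N) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def attack_step(points, delta, model, target_class, lam=5.0, lr=1e-2):
    """One gradient step pushing `points + delta` toward `target_class` while staying near the surface."""
    adv = points + delta
    logits = model(adv.unsqueeze(0))  # assumed: model maps (1, N, 3) -> class logits
    loss = torch.nn.functional.cross_entropy(logits, target_class) + lam * chamfer(adv, points)
    loss.backward()
    with torch.no_grad():
        delta -= lr * delta.grad
        delta.grad.zero_()
    return loss.item()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1024 * 3, 40))  # toy stand-in classifier
points = torch.randn(1024, 3)
delta = torch.zeros_like(points, requires_grad=True)
for _ in range(10):
    attack_step(points, delta, model, target_class=torch.tensor([7]))
```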

Geometry-aware Generation of Adversarial and Cooperative Point Clouds

no code implementations • 25 Sep 2019 • Yuxin Wen, Jiehong Lin, Ke Chen, Kui Jia

Recent studies show that machine learning models are vulnerable to adversarial examples.

Fairness · Object

Orthogonal Deep Neural Networks

1 code implementation • 15 May 2019 • Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, DaCheng Tao

To this end, we first prove that DNNs are locally isometric on data distributions of practical interest; by using a new covering of the sample space and introducing the local isometry property of DNNs into generalization analysis, we establish a new generalization error bound that is both scale- and range-sensitive to the singular value spectrum of each of the network's weight matrices.

Image Classification
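A common practical companion to such singular-value-spectrum bounds is a soft orthogonality penalty that pulls each weight matrix's Gram matrix toward the identity, i.e., all singular values toward 1. The penalty below is a generic sketch under that assumption, not necessarily the paper's exact training recipe.

```python
import torch

def orthogonality_penalty(model: torch.nn.Module) -> torch.Tensor:
    """Sum of ||W W^T - I||_F^2 over every 2-D weight matrix in the model."""
    penalty = torch.zeros(())
    for w in model.parameters():
        if w.dim() == 2:
            gram = w @ w.t()
            penalty = penalty + ((gram - torch.eye(w.shape[0])) ** 2).sum()
    return penalty

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
reg = orthogonality_penalty(model)  # combine with the task loss, e.g. task_loss + 1e-4 * reg
print(reg.item())
```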
