Search Results for author: Zhun Deng

Found 31 papers, 7 papers with code

An Economic Solution to Copyright Challenges of Generative AI

no code implementations 22 Apr 2024 Jiachen T. Wang, Zhun Deng, Hiroaki Chiba-Okabe, Boaz Barak, Weijie J. Su

Generative artificial intelligence (AI) systems are trained on large data corpora to generate new pieces of text, images, videos, and other media.

Provable Multi-Party Reinforcement Learning with Diverse Human Feedback

no code implementations 8 Mar 2024 Huiying Zhong, Zhun Deng, Weijie J. Su, Zhiwei Steven Wu, Linjun Zhang

Our work initiates the theoretical study of multi-party RLHF that explicitly models the diverse preferences of multiple individuals.

Fairness · Meta-Learning +1

Can AI Be as Creative as Humans?

no code implementations 3 Jan 2024 Haonan Wang, James Zou, Michael Mozer, Anirudh Goyal, Alex Lamb, Linjun Zhang, Weijie J Su, Zhun Deng, Michael Qizhe Xie, Hannah Brown, Kenji Kawaguchi

With the rise of advanced generative AI models capable of tasks once reserved for human creativity, the study of AI's creative potential becomes imperative for its responsible development and application.

Learning and Forgetting Unsafe Examples in Large Language Models

no code implementations 20 Dec 2023 Jiachen Zhao, Zhun Deng, David Madras, James Zou, Mengye Ren

As the number of large language models (LLMs) released to the public grows, there is a pressing need to understand the safety implications associated with these models learning from third-party custom finetuning data.

Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models

1 code implementation 22 Nov 2023 Thomas P. Zollo, Todd Morrill, Zhun Deng, Jake C. Snell, Toniann Pitassi, Richard Zemel

The recent explosion in the capabilities of large language models has led to a wave of interest in how best to prompt a model to perform a given task.

Code Generation

Distribution-Free Statistical Dispersion Control for Societal Applications

no code implementations NeurIPS 2023 Zhun Deng, Thomas P. Zollo, Jake C. Snell, Toniann Pitassi, Richard Zemel

Explicit finite-sample statistical guarantees on model performance are an important ingredient in responsible machine learning.

How Does Information Bottleneck Help Deep Learning?

1 code implementation 30 May 2023 Kenji Kawaguchi, Zhun Deng, Xu Ji, Jiaoyang Huang

In this paper, we provide the first rigorous learning theory for justifying the benefit of information bottleneck in deep learning by mathematically relating information bottleneck to generalization errors.

Generalization Bounds · Learning Theory

Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks

2 code implementations 8 Apr 2023 Yuzhen Mao, Zhun Deng, Huaxiu Yao, Ting Ye, Kenji Kawaguchi, James Zou

As machine learning has been deployed ubiquitously across applications in modern data science, algorithmic fairness has become a major concern.

Fairness · Open-Ended Question Answering +1

HappyMap: A Generalized Multi-calibration Method

no code implementations 8 Mar 2023 Zhun Deng, Cynthia Dwork, Linjun Zhang

Fairness is captured by incorporating demographic subgroups into the class of functions $\mathcal{C}$.

Conformal Prediction · Fairness +1
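HappyMap generalizes multi-calibration. As a rough illustration of the multi-calibration idea it builds on (this is a textbook-style boosting loop over subgroup indicator functions, not the paper's HappyMap algorithm; the function name, `alpha` threshold, and update rule are illustrative assumptions):

```python
def multicalibrate(preds, labels, subgroups, alpha=0.01, max_iters=1000):
    """Illustrative multi-calibration post-processing.

    preds     : list of initial scores in [0, 1]
    labels    : list of binary outcomes
    subgroups : list of boolean membership lists, one per subgroup
                (a simple stand-in for the class C of functions)

    Repeatedly finds the subgroup whose mean residual (label minus
    prediction) is largest in magnitude and at least alpha, shifts
    predictions on that subgroup by the residual, and clips to [0, 1].
    """
    p = list(preds)
    for _ in range(max_iters):
        worst, worst_bias = None, alpha
        for g in subgroups:
            members = [i for i, m in enumerate(g) if m]
            if not members:
                continue
            bias = sum(labels[i] - p[i] for i in members) / len(members)
            if abs(bias) >= worst_bias:
                worst, worst_bias = (members, bias), abs(bias)
        if worst is None:
            break  # every subgroup is calibrated to within alpha
        members, bias = worst
        for i in members:
            p[i] = min(1.0, max(0.0, p[i] + bias))
    return p
```

On a toy example with two disjoint subgroups (one all-positive, one all-negative), the loop drives each subgroup's predictions toward its base rate.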

Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data

1 code implementation 13 Feb 2023 Ryumei Nakada, Halil Ibrahim Gulluk, Zhun Deng, Wenlong Ji, James Zou, Linjun Zhang

We show that the algorithm can detect the ground-truth pairs and improve performance by fully exploiting unpaired datasets.

Contrastive Learning
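For orientation on what this paper analyzes, here is a minimal sketch of a symmetric CLIP-style multimodal contrastive objective over paired embeddings; it is an illustrative baseline only, not the paper's algorithm for exploiting unpaired data, and the function name and temperature value are assumptions:

```python
import math

def clip_style_loss(img_embs, txt_embs, temperature=0.07):
    """Symmetric contrastive loss: each (img_embs[i], txt_embs[i]) is a
    positive pair; all other in-batch combinations act as negatives."""
    def normalize(v):
        n = math.sqrt(sum(a * a for a in v))
        return [a / n for a in v]

    imgs = [normalize(v) for v in img_embs]
    txts = [normalize(v) for v in txt_embs]
    n = len(imgs)
    # cosine-similarity logits scaled by the temperature
    logits = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
               for j in range(n)] for i in range(n)]

    def ce(row, target):  # cross-entropy of a softmax row at `target`
        return -row[target] + math.log(sum(math.exp(v) for v in row))

    # average cross-entropy in both directions: image->text, text->image
    loss_i2t = sum(ce(logits[i], i) for i in range(n)) / n
    loss_t2i = sum(ce([logits[i][j] for i in range(n)], j) for j in range(n)) / n
    return (loss_i2t + loss_t2i) / 2
```

Correctly paired embeddings yield a near-zero loss, while shuffled (mismatched) pairs yield a large one.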

Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions

1 code implementation 27 Dec 2022 Jake C. Snell, Thomas P. Zollo, Zhun Deng, Toniann Pitassi, Richard Zemel

In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor.
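The abstract does not spell out the construction, but a classical distribution-free ingredient for bounding a loss quantile is the order-statistic argument sketched below; this is an illustrative sketch of that standard argument, not necessarily the paper's framework, and the function names are assumptions:

```python
import math

def quantile_bound_confidence(n: int, k: int, q: float) -> float:
    """Probability that the k-th smallest of n i.i.d. losses upper-bounds
    the q-quantile of the loss distribution.

    By the standard order-statistic argument this equals
    P(Binomial(n, q) < k) = sum_{i<k} C(n, i) q^i (1-q)^(n-i).
    """
    return sum(math.comb(n, i) * q**i * (1 - q) ** (n - i) for i in range(k))

def quantile_upper_bound(losses, q: float, delta: float) -> float:
    """Smallest order statistic of the observed losses that bounds the
    q-quantile with confidence at least 1 - delta; conservatively returns
    +inf if no order statistic achieves the required confidence."""
    xs = sorted(losses)
    n = len(xs)
    for k in range(1, n + 1):
        if quantile_bound_confidence(n, k, q) >= 1 - delta:
            return xs[k - 1]
    return float("inf")
```

With 100 samples, for instance, roughly the 95th order statistic upper-bounds the 0.9-quantile at 90% confidence; with a single sample no order statistic can certify a 0.9-quantile at 99% confidence, so the bound is vacuous.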

Reinforcement Learning with Stepwise Fairness Constraints

no code implementations 8 Nov 2022 Zhun Deng, He Sun, Zhiwei Steven Wu, Linjun Zhang, David C. Parkes

AI methods are used in societally important settings, ranging from credit to employment to housing, and it is crucial to ensure fairness in algorithmic decision making.

Decision Making · Fairness +2

Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach

1 code implementation 7 Nov 2022 Jiayao Zhang, Hongming Zhang, Zhun Deng, Dan Roth

We distill several insights from our analysis of the peer review process, conducted with the help of large LMs.

Fairness · Language Modelling +1

Robustness Implies Generalization via Data-Dependent Generalization Bounds

no code implementations 27 Jun 2022 Kenji Kawaguchi, Zhun Deng, Kyle Luh, Jiaoyang Huang

This paper proves that robustness implies generalization via data-dependent generalization bounds.

Generalization Bounds

FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data

no code implementations 6 Jun 2022 Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie J. Su, James Zou

Specifically, FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods with logits-based losses.

Classification · Fairness

Scaffolding Sets

no code implementations 4 Nov 2021 Maya Burhanpurkar, Zhun Deng, Cynthia Dwork, Linjun Zhang

Predictors map individual instances in a population to the interval $[0, 1]$.

The Power of Contrast for Feature Learning: A Theoretical Analysis

no code implementations 6 Oct 2021 Wenlong Ji, Zhun Deng, Ryumei Nakada, James Zou, Linjun Zhang

Contrastive learning has achieved state-of-the-art performance in various self-supervised learning tasks and even outperforms its supervised counterpart.

Contrastive Learning · Self-Supervised Learning +1

An Unconstrained Layer-Peeled Perspective on Neural Collapse

no code implementations ICLR 2022 Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su

We prove that gradient flow on this model converges to critical points of a minimum-norm separation problem exhibiting neural collapse in its global minimizer.

Understanding Dynamics of Nonlinear Representation Learning and Its Application

no code implementations 28 Jun 2021 Kenji Kawaguchi, Linjun Zhang, Zhun Deng

Representation learning allows us to automatically discover suitable representations from raw sensory data.

Representation Learning

Adversarial Training Helps Transfer Learning via Better Representations

no code implementations NeurIPS 2021 Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Zou

Recent works empirically demonstrate that adversarial training in the source data can improve the ability of models to transfer to new domains.

Transfer Learning

How Gradient Descent Separates Data with Neural Collapse: A Layer-Peeled Perspective

no code implementations NeurIPS 2021 Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J Su

In this paper, we derive a landscape analysis of the surrogate model to study the inductive bias of the neural features and parameters of neural networks trained with the cross-entropy loss.

Inductive Bias

When and How Mixup Improves Calibration

no code implementations 11 Feb 2021 Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou

In addition, we study how Mixup improves calibration in semi-supervised learning.

Data Augmentation

Toward Better Generalization Bounds with Locally Elastic Stability

no code implementations 27 Oct 2020 Zhun Deng, Hangfeng He, Weijie J. Su

Given this, we propose locally elastic stability as a weaker, distribution-dependent stability notion that still yields exponential generalization bounds.

Generalization Bounds · Learning Theory

Towards Understanding the Dynamics of the First-Order Adversaries

no code implementations ICML 2020 Zhun Deng, Hangfeng He, Jiaoyang Huang, Weijie J. Su

An acknowledged weakness of neural networks is their vulnerability to adversarial perturbations to the inputs.

How Does Mixup Help With Robustness and Generalization?

no code implementations ICLR 2021 Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou

For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss.

Data Augmentation
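Both Mixup papers above analyze the standard Mixup augmentation of Zhang et al.; for reference, here is a minimal sketch of that augmentation (the function name and default `alpha` are illustrative):

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=1.0):
    """Mix two training examples: draw lam ~ Beta(alpha, alpha) and form
    convex combinations of both the inputs and the (one-hot) labels."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

Training on these mixed pairs, rather than the raw examples, is the regularization whose calibration and robustness effects the two papers study.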

Interpreting Robust Optimization via Adversarial Influence Functions

no code implementations ICML 2020 Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang

Robust optimization is widely used in modern data science, especially in adversarial training.

Decision-Aware Conditional GANs for Time Series Data

no code implementations 26 Sep 2020 He Sun, Zhun Deng, Hui Chen, David C. Parkes

We introduce the decision-aware time-series conditional generative adversarial network (DAT-CGAN) as a method for time-series generation.

Generative Adversarial Network · Time Series +2

Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations

no code implementations 20 Jun 2020 Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur

We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps.

Domain Generalization · Fairness

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data

no code implementations 15 Jun 2020 Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou

In this work, we investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data.

Adversarial Robustness · Data Augmentation +2

Architecture Selection via the Trade-off Between Accuracy and Robustness

no code implementations 4 Jun 2019 Zhun Deng, Cynthia Dwork, Jialiang Wang, Yao Zhao

We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning.

Adversarial Attack
