Search Results for author: Zinan Lin

Found 27 papers, 18 papers with code

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, LCSC achieves better performance with a single number of function evaluations (NFE = 1) than the base model with NFE = 2 on consistency distillation, and decreases the NFE of diffusion models from 15 to 9 while maintaining generation quality on CIFAR-10.
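
As a minimal, hedged sketch of the idea in the title (forming a new model from a linear combination of the weights of saved training checkpoints), the Python snippet below averages checkpoint state dicts with given coefficients; the file names and coefficients are placeholders, and LCSC's actual search for good coefficients is not shown.

```python
# Minimal sketch: form a model whose weights are a linear combination of
# saved checkpoints. Paths and coefficients are placeholders; LCSC searches
# for good coefficients, which this sketch does not do.
import torch

def combine_checkpoints(paths, coeffs):
    """Return a state dict equal to sum_i coeffs[i] * state_dict_i."""
    assert len(paths) == len(coeffs)
    combined = None
    for path, c in zip(paths, coeffs):
        sd = torch.load(path, map_location="cpu")
        if combined is None:
            combined = {k: c * v.float() for k, v in sd.items()}
        else:
            for k, v in sd.items():
                combined[k] += c * v.float()
    return combined

# Hypothetical usage:
# model.load_state_dict(combine_checkpoints(
#     ["ckpt_100k.pt", "ckpt_110k.pt", "ckpt_120k.pt"], [0.2, 0.3, 0.5]))
```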

Efficiently Computing Similarities to Private Datasets

no code implementations • 13 Mar 2024 • Arturs Backurs, Zinan Lin, Sepideh Mahabadi, Sandeep Silwal, Jakub Tarnawski

We abstract out this common subroutine and study the following fundamental algorithmic problem: Given a similarity function $f$ and a large high-dimensional private dataset $X \subset \mathbb{R}^d$, output a differentially private (DP) data structure which approximates $\sum_{x \in X} f(x, y)$ for any query $y$.

Density Estimation • Dimensionality Reduction
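
A minimal sketch of the problem statement above, not the paper's data structure: if the similarity $f$ is bounded in $[0, 1]$, a naive per-query baseline adds Laplace noise scaled to the sensitivity of the sum. The Gaussian kernel and the epsilon value below are illustrative assumptions.

```python
# Naive per-query baseline for privately answering sum_{x in X} f(x, y):
# with f bounded in [0, 1], one data point changes the sum by at most 1,
# so Laplace(1/eps) noise gives eps-DP for a single query. This is only a
# baseline, not the data structure studied in the paper.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def dp_similarity_sum(X, y, eps=1.0, seed=0):
    rng = np.random.default_rng(seed)
    true_sum = sum(gaussian_kernel(x, y) for x in X)
    return true_sum + rng.laplace(scale=1.0 / eps)

X = np.random.default_rng(1).normal(size=(1000, 8))  # stand-in private dataset
print(dp_similarity_sum(X, np.zeros(8), eps=0.5))     # noisy similarity sum
```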

Differentially Private Synthetic Data via Foundation Model APIs 2: Text

1 code implementation • 4 Mar 2024 • Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin

Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.

Privacy Preserving
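
A heavily simplified, hedged outline of one Private Evolution-style iteration as referenced in the snippet above (API-generated candidates, a noisy nearest-neighbor vote by the private data, then resampling). The embeddings, noise scale, and single Gaussian-noise accounting step are illustrative assumptions, not the authors' implementation.

```python
# Heavily simplified sketch of one Private Evolution-style iteration.
# `candidate_embs` would be embeddings of samples produced by a
# foundation-model API; `private_embs` are embeddings of the private data.
import numpy as np

def pe_resample(private_embs, candidate_embs, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(candidate_embs))
    for p in private_embs:                        # each private point votes for
        dists = np.linalg.norm(candidate_embs - p, axis=1)
        votes[np.argmin(dists)] += 1              # ... its nearest candidate
    # Gaussian noise on the vote histogram is what provides the DP guarantee.
    noisy = np.clip(votes + rng.normal(scale=sigma, size=votes.shape), 0, None)
    if noisy.sum() > 0:
        probs = noisy / noisy.sum()
    else:
        probs = np.full(len(noisy), 1 / len(noisy))
    # Resample candidate indices in proportion to their noisy support; the
    # selected candidates would then be sent back to the API for variations.
    return rng.choice(len(candidate_embs), size=len(candidate_embs), p=probs)
```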

Mixture-of-Linear-Experts for Long-term Time Series Forecasting

no code implementations • 11 Dec 2023 • Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti

By using MoLE, existing linear-centric models can achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.

Time Series • Time Series Forecasting
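
A toy, hedged PyTorch sketch of a mixture-of-linear-experts forecasting head: several linear experts over the lookback window, mixed by a softmax gate. The gate input and all sizes are illustrative assumptions; the paper's router reportedly uses additional (e.g., timestamp) information, which this sketch omits.

```python
# Toy mixture-of-linear-experts forecaster (illustrative sizes only).
import torch
import torch.nn as nn

class MoLEForecaster(nn.Module):
    def __init__(self, lookback, horizon, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(lookback, horizon) for _ in range(num_experts))
        self.gate = nn.Linear(lookback, num_experts)

    def forward(self, x):                                  # x: (batch, lookback)
        weights = torch.softmax(self.gate(x), dim=-1)      # (batch, E)
        preds = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, H, E)
        return (preds * weights.unsqueeze(1)).sum(dim=-1)  # (batch, H)

model = MoLEForecaster(lookback=96, horizon=24)
print(model(torch.randn(8, 96)).shape)                     # torch.Size([8, 24])
```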

Enhanced Index-Based Feedback Overhead Reduction for WLANs

no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

A compressed beamforming algorithm is used in the current Wi-Fi standard to reduce the beamforming feedback overhead (BFO).

Clustering

An Unsupervised Machine Learning Scheme for Index-Based CSI Feedback in Wi-Fi

no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

We explore several methods that consider different representations of the data in the candidate set.

Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation

1 code implementation • 28 Jul 2023 • Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang

This work aims at decreasing the end-to-end generation latency of large language models (LLMs).
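
The snippet above states only the goal; as a hedged sketch of a skeleton-then-parallel-expansion pipeline in the spirit of Skeleton-of-Thought, the code below first asks a model for a short outline and then expands the points concurrently. `llm_call` is a hypothetical stand-in for whatever completion API is used, and the prompts are illustrative, not the paper's.

```python
# Sketch of a skeleton-then-parallel-expansion pipeline. `llm_call` is a
# hypothetical, blocking text-completion function that you would supply.
from concurrent.futures import ThreadPoolExecutor

def llm_call(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API here")

def skeleton_of_thought(question: str, max_points: int = 5) -> str:
    # Stage 1: ask for a short skeleton of the answer.
    skeleton = llm_call(
        f"List at most {max_points} short bullet points that outline an "
        f"answer to: {question}")
    points = [p.strip("-* ").strip() for p in skeleton.splitlines() if p.strip()]
    # Stage 2: expand every point in parallel to cut end-to-end latency.
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(
            lambda p: llm_call(f"Question: {question}\n"
                               f"Expand this point in 2-3 sentences: {p}"),
            points))
    return "\n\n".join(expansions)
```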

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

no code implementations • NeurIPS 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li

Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly.

Adversarial Robustness • Ethics • +1

OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models

1 code implementation • 15 Jun 2023 • Enshu Liu, Xuefei Ning, Zinan Lin, Huazhong Yang, Yu Wang

Diffusion probabilistic models (DPMs) are a new class of generative models that have achieved state-of-the-art generation quality in various domains.

Differentially Private Synthetic Data via Foundation Model APIs 1: Images

1 code implementation • 24 May 2023 • Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin

We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.

Selective Pre-training for Private Fine-tuning

1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang

Besides performance improvements, our framework also shows that with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models that do not have access to private data, highlighting the promise of private learning as a tool for model compression and efficiency.

Model Compression • Transfer Learning

Bounding System-Induced Biases in Recommender Systems with A Randomized Dataset

1 code implementation • 21 Mar 2023 • Dugang Liu, Pengxiang Cheng, Zinan Lin, Xiaolian Zhang, Zhenhua Dong, Rui Zhang, Xiuqiang He, Weike Pan, Zhong Ming

To bridge this gap, we study the debiasing problem from a new perspective and propose to directly minimize the upper bound of an ideal objective function, which facilitates a better potential solution to the system-induced biases.

Recommendation Systems

Summary Statistic Privacy in Data Sharing

1 code implementation • 3 Mar 2023 • Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti

We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation).

Quantization

On the Privacy Properties of GAN-generated Samples

no code implementations • 3 Jun 2022 • Zinan Lin, Vyas Sekar, Giulia Fanti

By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.

RareGAN: Generating Samples for Rare Classes

1 code implementation • 20 Mar 2022 • Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar

We study the problem of learning generative adversarial networks (GANs) for a rare class of an unlabeled dataset subject to a labeling budget.

Active Learning

Intelligent Feedback Overhead Reduction (iFOR) in Wi-Fi 7 and Beyond

no code implementations • 9 Mar 2022 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

IEEE 802.11 standard-based wireless local area networks (WLANs), or Wi-Fi networks, are critical for providing internet access in today's world.

Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions

no code implementations • 22 Jan 2021 • Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar

A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.

Epidemiology • Open-Ended Question Answering
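
A toy illustration of the tail-matching idea in the snippet above, not the paper's architecture: pushing uniform noise through a Pareto inverse CDF with a learnable tail index produces samples with a polynomial (heavy) tail, which standard bounded-output generators struggle to represent. The initial tail index below is an arbitrary assumption.

```python
# Toy heavy-tailed output layer: map u ~ Uniform(0, 1) through the Pareto
# inverse CDF x = u**(-1/alpha), with a learnable tail index alpha.
# Illustrative only; the Pareto GAN paper grounds this in extreme value
# theory inside a full GAN training setup.
import torch
import torch.nn as nn

class ParetoTailHead(nn.Module):
    def __init__(self, init_alpha=2.0):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.log(torch.tensor(init_alpha)))

    def forward(self, u):                          # u in (0, 1)
        alpha = self.log_alpha.exp()
        return u.clamp(1e-6, 1 - 1e-6) ** (-1.0 / alpha)

head = ParetoTailHead()
samples = head(torch.rand(10000))
print(samples.max())    # occasional very large values: polynomial tail
```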

MLGO: a Machine Learning Guided Compiler Optimizations Framework

1 code implementation • 13 Jan 2021 • Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li

Leveraging machine-learning (ML) techniques for compiler optimizations has been widely studied and explored in academia.

BIG-bench Machine Learning

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

1 code implementation • NeurIPS 2021 • Zinan Lin, Vyas Sekar, Giulia Fanti

Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
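
Since the snippet only names the technique, here is a standard power-iteration sketch of spectral normalization of a weight matrix; this is the common formulation that the paper analyzes rather than a new method, and PyTorch already provides it as `torch.nn.utils.spectral_norm`.

```python
# Standard spectral normalization via power iteration: W_SN = W / sigma_max(W).
import torch

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    """Return W / sigma_max(W) and the updated left-singular-vector estimate."""
    for _ in range(n_iters):
        v = torch.nn.functional.normalize(W.t() @ u, dim=0, eps=eps)
        u = torch.nn.functional.normalize(W @ v, dim=0, eps=eps)
    sigma = u @ W @ v                 # power-iteration estimate of sigma_max(W)
    return W / sigma, u

W = torch.randn(64, 128)
u = torch.randn(64)
W_sn, u = spectral_normalize(W, u, n_iters=5)
print(torch.linalg.matrix_norm(W_sn, ord=2))   # close to 1
```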

Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions

4 code implementations • 30 Sep 2019 • Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar

By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.

Synthetic Data Generation • Time Series • +1

InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs

1 code implementation • 14 Jun 2019 • Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh

Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.

Disentanglement • Model Selection

Robustness of Conditional GANs to Noisy Labels

2 code implementations • NeurIPS 2018 • Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, Sewoong Oh

When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).
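
As a hedged sketch of the setting in the snippet (the label-noise distribution is known), the code below corrupts clean labels through a known confusion matrix, the kind of step an RCGAN-style pipeline applies to generated labels before they reach the discriminator. The confusion matrix is a made-up example, not taken from the paper.

```python
# Sketch: pass clean labels through a known label-noise channel, given as a
# confusion matrix C with C[i, j] = P(noisy = j | clean = i).
import torch

def corrupt_labels(clean_labels, C):
    """Sample noisy labels for a batch of clean labels."""
    probs = C[clean_labels]                        # (batch, num_classes)
    return torch.multinomial(probs, num_samples=1).squeeze(1)

C = torch.tensor([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])                # made-up known noise channel
clean = torch.randint(0, 3, (16,))                 # e.g. the generator's labels
noisy = corrupt_labels(clean, C)
# An RCGAN-style discriminator would then compare (generated sample, noisy)
# pairs against (real sample, observed noisy label) pairs.
```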

RNN-SM: Fast Steganalysis of VoIP Streams Using Recurrent Neural Network

1 code implementation • IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 2018 • Zinan Lin, Yongfeng Huang, Jilong Wang

Experiments show that on full-embedding-rate samples, RNN-SM achieves high detection accuracy, which remains over 90% even when the sample is as short as 0.1 s and is significantly higher than other state-of-the-art methods.

Quantization • Steganalysis
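
A hedged PyTorch sketch of a recurrent steganalysis classifier over per-frame VoIP codewords, in the spirit of the entry above; the embedding, layer sizes, and feature encoding are illustrative assumptions rather than the exact RNN-SM architecture.

```python
# Toy recurrent steganalysis classifier: embed per-frame codewords, run an
# LSTM over the frame sequence, and predict cover vs. stego.
import torch
import torch.nn as nn

class SteganalysisRNN(nn.Module):
    def __init__(self, codebook_size=256, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # cover vs. stego logits

    def forward(self, codewords):                   # (batch, frames)
        h, _ = self.rnn(self.embed(codewords))
        return self.head(h[:, -1])                  # classify from last frame

model = SteganalysisRNN()
frames = torch.randint(0, 256, (4, 50))             # 4 clips, 50 frames each
print(model(frames).shape)                           # torch.Size([4, 2])
```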

PacGAN: The power of two samples in generative adversarial networks

7 code implementations • NeurIPS 2018 • Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.

Two-sample testing • Vocal Bursts Valence Prediction
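
The snippet above is general background; as a hedged sketch of the packing idea behind PacGAN, the toy discriminator below scores m samples jointly by concatenating them, rather than scoring each sample independently. All sizes are illustrative assumptions.

```python
# Sketch of "packing": the discriminator scores groups of m samples jointly
# by concatenating them along the feature dimension (illustrative sizes).
import torch
import torch.nn as nn

class PackedDiscriminator(nn.Module):
    def __init__(self, sample_dim, pack=2, hidden=128):
        super().__init__()
        self.pack = pack
        self.net = nn.Sequential(
            nn.Linear(sample_dim * pack, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1))

    def forward(self, x):                 # x: (batch * pack, sample_dim)
        packed = x.view(-1, x.shape[1] * self.pack)
        return self.net(packed)           # one score per packed group

D = PackedDiscriminator(sample_dim=16, pack=2)
print(D(torch.randn(8, 16)).shape)        # torch.Size([4, 1])
```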
