1 code implementation • ICML 2020 • Zinan Lin, Kiran Thekumparampil, Giulia Fanti, Sewoong Oh
This contrastive regularizer is inspired by a natural notion of disentanglement: latent traversal.
1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
For example, LCSC achieves better performance with one function evaluation (NFE) than the base model with 2 NFEs on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining the generation quality on CIFAR-10.
no code implementations • 25 Mar 2024 • Lin Zhao, Tianchen Zhao, Zinan Lin, Xuefei Ning, Guohao Dai, Huazhong Yang, Yu Wang
In recent years, there has been significant progress in the development of text-to-image generative models.
no code implementations • 13 Mar 2024 • Arturs Backurs, Zinan Lin, Sepideh Mahabadi, Sandeep Silwal, Jakub Tarnawski
We abstract out this common subroutine and study the following fundamental algorithmic problem: Given a similarity function $f$ and a large high-dimensional private dataset $X \subset \mathbb{R}^d$, output a differentially private (DP) data structure which approximates $\sum_{x \in X} f(x, y)$ for any query $y$.
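To make the problem concrete, here is a minimal non-private-structure baseline (not the paper's algorithm): if the similarity $f$ is bounded in $[0, 1]$, a single query answer has sensitivity 1, so the Laplace mechanism gives an $\varepsilon$-DP answer per query. The function and variable names are illustrative assumptions.

```python
import numpy as np

def dp_similarity_sum(X, y, f, eps, rng):
    """Answer sum_{x in X} f(x, y) under eps-DP via the Laplace mechanism.
    Assumes f(x, y) in [0, 1], so adding or removing one record changes the
    sum by at most 1 (sensitivity 1). Each released query consumes eps."""
    true_sum = sum(f(x, y) for x in X)
    return true_sum + rng.laplace(scale=1.0 / eps)

# Example similarity: Gaussian (RBF) kernel
rbf = lambda x, y: np.exp(-np.sum((x - y) ** 2))
```

The interesting question the paper studies is how a DP *data structure* can answer many queries more accurately than this per-query baseline, whose privacy cost grows with the number of queries.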
1 code implementation • 4 Mar 2024 • Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin
Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.
1 code implementation • 11 Dec 2023 • Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti
By using MoLE, existing linear-centric models can achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.
Ranked #1 on Time Series Forecasting on Electricity (720)
no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
A compressed beamforming algorithm is used in the current Wi-Fi standard to reduce the beamforming feedback overhead (BFO).
no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
We explore several methods that consider different representations of the data in the candidate set.
1 code implementation • 21 Sep 2023 • Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, FatemehSadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim
Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels.
1 code implementation • 28 Jul 2023 • Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang
This work aims at decreasing the end-to-end generation latency of large language models (LLMs).
no code implementations • NeurIPS 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly.
1 code implementation • 15 Jun 2023 • Enshu Liu, Xuefei Ning, Zinan Lin, Huazhong Yang, Yu Wang
Diffusion probabilistic models (DPMs) are a new class of generative models that have achieved state-of-the-art generation quality in various domains.
1 code implementation • 24 May 2023 • Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin
We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.
1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang
Besides performance improvements, our framework also shows that with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models that do not have access to private data, highlighting the promise of private learning as a tool for model compression and efficiency.
1 code implementation • 21 Mar 2023 • Dugang Liu, Pengxiang Cheng, Zinan Lin, Xiaolian Zhang, Zhenhua Dong, Rui Zhang, Xiuqiang He, Weike Pan, Zhong Ming
To bridge this gap, we study the debiasing problem from a new perspective and propose to directly minimize the upper bound of an ideal objective function, which facilitates a better potential solution to the system-induced biases.
1 code implementation • 3 Mar 2023 • Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti
We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation).
no code implementations • 3 Jun 2022 • Zinan Lin, Vyas Sekar, Giulia Fanti
By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.
1 code implementation • 20 Mar 2022 • Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar
We study the problem of learning generative adversarial networks (GANs) for a rare class in an unlabeled dataset, subject to a labeling budget.
no code implementations • 9 Mar 2022 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
Wireless local area networks (WLANs), or Wi-Fi networks, based on the IEEE 802.11 standard are critical for providing internet access in today's world.
no code implementations • 22 Jan 2021 • Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar
A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.
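The extreme-value-theory ingredient here is estimating how heavy a distribution's tail is from samples. The following is not the Pareto GAN itself, only an illustrative sketch of the classical Hill estimator for the tail index; the helper name and parameters are hypothetical.

```python
import numpy as np

def hill_tail_index(samples, k):
    """Hill estimator of the tail index gamma (gamma = 1/alpha for a
    Pareto-like tail), computed from the k largest order statistics."""
    x = np.sort(samples)[::-1]          # sort descending
    return float(np.mean(np.log(x[:k] / x[k])))
```

A generator whose output layer is shaped to match such an estimated tail index can, in principle, reproduce the asymptotic behavior that standard GAN architectures with light-tailed noise cannot.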
1 code implementation • 13 Jan 2021 • Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li
Leveraging machine-learning (ML) techniques for compiler optimizations has been widely studied and explored in academia.
1 code implementation • NeurIPS 2021 • Zinan Lin, Vyas Sekar, Giulia Fanti
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
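For reference, the core SN operation divides a weight matrix by an estimate of its largest singular value, typically obtained by power iteration. A minimal NumPy sketch (iteration count and names are illustrative, not from the paper):

```python
import numpy as np

def spectral_normalize(W, n_iters=100, rng=None):
    """Rescale W so its largest singular value is (approximately) 1,
    using power iteration to estimate the top singular value."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                   # estimated top singular value
    return W / sigma
```

In practice (e.g., in GAN discriminators), a single power-iteration step per training update is usually enough, since the weights change slowly between steps.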
4 code implementations • 30 Sep 2019 • Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar
By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.
1 code implementation • 14 Jun 2019 • Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh
Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.
2 code implementations • NeurIPS 2018 • Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, Sewoong Oh
When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).
1 code implementation • IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 2018 • Zinan Lin, Yongfeng Huang, Jilong Wang
Experiments show that on full-embedding-rate samples, RNN-SM achieves high detection accuracy, which remains over 90% even when the sample is as short as 0.1 s and is significantly higher than other state-of-the-art methods.
7 code implementations • NeurIPS 2018 • Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh
Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.