1 code implementation • 11 Mar 2024 • Ziliang Samuel Zhong, Xiang Pan, Qi Lei
Under our framework, we design and analyze a learning procedure consisting of learning an approximately shared feature representation from the source tasks and fine-tuning it on the target task.
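As a minimal numpy sketch of this two-phase recipe (an illustration under simplified linear assumptions, not the paper's actual estimator): the source tasks share a low-dimensional linear representation, which is estimated from per-task least-squares solutions and then frozen while a small task-specific head is fit on the target data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n_src, n_tgt = 50, 5, 20, 100, 15

# Ground truth: all tasks share a k-dimensional feature subspace B.
B = np.linalg.qr(rng.normal(size=(d, k)))[0]
W_src = rng.normal(size=(k, T))

# Phase 1: estimate the shared subspace from the source tasks by
# stacking per-task least-squares solutions and taking their top-k
# left singular vectors (a simple stand-in for representation learning).
thetas = []
for t in range(T):
    X = rng.normal(size=(n_src, d))
    y = X @ (B @ W_src[:, t]) + 0.1 * rng.normal(size=n_src)
    thetas.append(np.linalg.lstsq(X, y, rcond=None)[0])
B_hat = np.linalg.svd(np.stack(thetas, axis=1))[0][:, :k]

# Phase 2: fine-tune on the target task -- fit only a k-dim head on
# top of the frozen representation, using few target samples.
X_tgt = rng.normal(size=(n_tgt, d))
y_tgt = X_tgt @ (B @ rng.normal(size=k)) + 0.1 * rng.normal(size=n_tgt)
head = np.linalg.lstsq(X_tgt @ B_hat, y_tgt, rcond=None)[0]

# ~ sqrt(k) = 2.24 when the estimated subspace aligns with col(B).
print("subspace alignment:", np.linalg.norm(B_hat.T @ B))
```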
no code implementations • 5 Mar 2024 • Hoang Phan, Andrew Gordon Wilson, Qi Lei
Models trained on data composed of different groups or domains can suffer from severe performance degradation under distribution shifts.
no code implementations • 13 Feb 2024 • Sheng Liu, Zihan Wang, Qi Lei
In this work, we propose a strong reconstruction attack in the setting of federated learning.
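For intuition about why gradients leak training data (a classical observation for layers with a bias term, shown here for a linear model; this is not the paper's attack), a single example's input can be read off exactly from the reported gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
w, b = rng.normal(size=d), 0.3               # shared model parameters

# Victim computes the gradient of 0.5*(w.x + b - y)^2 on one example.
x_true, y_true = rng.normal(size=d), 1.0
r = w @ x_true + b - y_true                  # residual
g_w, g_b = r * x_true, r                     # gradients sent to the server

# Attacker: for a layer with a bias, the input is exactly the ratio of
# the weight gradient to the bias gradient -- single-sample recovery.
x_rec = g_w / g_b
y_rec = w @ x_rec + b - g_b
print("x error:", np.linalg.norm(x_rec - x_true))   # ~0
print("y error:", abs(y_rec - y_true))              # ~0
```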
no code implementations • 28 Jan 2024 • Hong Jun Jeon, Jason D. Lee, Qi Lei, Benjamin Van Roy
Previous theoretical results pertaining to meta-learning on sequences build on contrived assumptions and are somewhat convoluted.
no code implementations • 10 Dec 2023 • Jianwei Li, Sheng Liu, Qi Lei
Language models trained via federated learning (FL) demonstrate impressive capabilities in handling complex tasks while protecting user privacy.
no code implementations • 19 Oct 2023 • Jianwei Li, Qi Lei, Wei Cheng, Dongkuan Xu
The pruning objective has recently been extended beyond accuracy and sparsity to robustness in language models.
no code implementations • 19 Oct 2023 • Jianwei Li, Weizhi Gao, Qi Lei, Dongkuan Xu
It is widely acknowledged that large and sparse models have higher accuracy than small and dense models under the same model size constraints.
no code implementations • NeurIPS 2023 • Qian Yu, Yining Wang, Baihe Huang, Qi Lei, Jason D. Lee
We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity.
no code implementations • 7 Dec 2022 • Zihan Wang, Jason D. Lee, Qi Lei
Understanding when and how much a model gradient leaks information about the training sample is an important question in privacy.
no code implementations • 25 Oct 2022 • Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
Incorporating a deep generative model as the prior distribution in inverse problems has achieved substantial success in reconstructing images from corrupted observations.
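The underlying recipe can be sketched generically as searching the generator's latent space for a code that explains the measurements; the toy two-layer "generator" below is a hypothetical stand-in for a trained deep model:

```python
import numpy as np

rng = np.random.default_rng(2)
k, d, m = 8, 64, 32

# Toy stand-in for a trained generator G: latent z -> image x.
W1 = rng.normal(size=(d, k))
W2 = rng.normal(size=(d, d)) / np.sqrt(d)
G = lambda z: np.tanh(W2 @ np.maximum(W1 @ z, 0.0))

A = rng.normal(size=(m, d)) / np.sqrt(m)     # measurement operator
x_true = G(rng.normal(size=k))
y = A @ x_true                               # compressed observation

# Reconstruction: minimize ||A G(z) - y||^2 over z by finite-difference
# gradient descent (a real implementation would backprop through G).
z = rng.normal(size=k)
loss = lambda z: np.sum((A @ G(z) - y) ** 2)
for _ in range(500):
    g = np.array([(loss(z + 1e-4 * e) - loss(z - 1e-4 * e)) / 2e-4
                  for e in np.eye(k)])
    z -= 0.1 * g
print("measurement residual:", loss(z))
```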
no code implementations • 28 Sep 2022 • Chun-Yin Huang, Qi Lei, Xiaoxiao Li
Existing data assessment methods commonly require knowing the labels in advance, which is not feasible for our goal of 'knowing which data to label.'
no code implementations • 29 Mar 2022 • Jiaqi Yang, Qi Lei, Jason D. Lee, Simon S. Du
We give novel algorithms for multi-task and lifelong linear bandits with shared representation.
no code implementations • 24 Feb 2022 • Shuo Yang, Yijun Dong, Rachel Ward, Inderjit S. Dhillon, Sujay Sanghavi, Qi Lei
Data augmentation is popular in the training of large neural networks; however, there is currently no clear theoretical comparison among the different algorithmic choices for how to use augmented data.
no code implementations • 22 Jan 2022 • XiangYu Song, JianXin Li, Qi Lei, Wei Zhao, Yunliang Chen, Ajmal Mian
The goal of Knowledge Tracing (KT) is to estimate how well students have mastered a concept based on their historical learning of related exercises.
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Tianci Liu, Quan Zhang, Qi Lei
Automated hyper-parameter tuning for unsupervised learning, including inverse problems, remains a long-standing open problem due to the lack of validation data.
no code implementations • 18 Oct 2021 • Kurtland Chua, Qi Lei, Jason D. Lee
To address this gap, we analyze HRL in the meta-RL setting, where a learner learns latent hierarchical structure during meta-training for use in a downstream task.
no code implementations • NeurIPS 2021 • Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
While the theory of RL has traditionally focused on linear function approximation (or eluder dimension) approaches, little is known about nonlinear RL with neural net approximations of the Q functions.
no code implementations • NeurIPS 2021 • Baihe Huang, Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
This work considers a large family of bandit problems in which the unknown underlying reward function is non-concave, including low-rank generalized linear bandit problems and bandit problems whose reward is a two-layer neural network with polynomial activations.
no code implementations • 6 Jul 2021 • Kaixuan Huang, Sham M. Kakade, Jason D. Lee, Qi Lei
The eluder dimension and information gain are two complexity measures widely used in bandit and reinforcement learning.
no code implementations • 23 Jun 2021 • Qi Lei, Wei Hu, Jason D. Lee
Transfer learning is essential when abundant data is available in the source domain but labeled data in the target domain is scarce.
no code implementations • NeurIPS 2021 • Kurtland Chua, Qi Lei, Jason D. Lee
Representation learning has been widely studied in the context of meta-learning, enabling rapid learning of new tasks through shared representations.
no code implementations • 22 Feb 2021 • Tianle Cai, Ruiqi Gao, Jason D. Lee, Qi Lei
In this work, we propose a provably effective framework for domain adaptation based on label propagation.
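For context, a bare-bones version of graph-based label propagation (the generic primitive, not the proposed framework) clamps the labeled source points and repeatedly averages class scores over an affinity graph:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two clusters; a few labeled source points sit among unlabeled ones.
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
labels = np.full(80, -1)
labels[:5], labels[40:45] = 0, 1            # labeled source points

# Row-normalized RBF affinity matrix as the propagation operator.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)
P = W / W.sum(1, keepdims=True)

F = np.zeros((80, 2))                       # class score matrix
F[labels >= 0] = np.eye(2)[labels[labels >= 0]]
for _ in range(50):
    F = P @ F
    F[labels >= 0] = np.eye(2)[labels[labels >= 0]]  # clamp known labels
pred = F.argmax(1)
true_unlabeled = np.repeat([0, 1], 35)
print("accuracy on unlabeled:", (pred[labels < 0] == true_unlabeled).mean())
```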
no code implementations • 23 Oct 2020 • Jay Whang, Qi Lei, Alex Dimakis
We study image inverse problems with invertible generative priors, specifically normalizing flow models.
no code implementations • NeurIPS 2020 • Xiao Wang, Qi Lei, Ioannis Panageas
Sampling is a fundamental task with numerous applications in machine learning.
no code implementations • NeurIPS 2021 • Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) that do not require labeled data, in order to learn useful semantic representations.
no code implementations • 23 Mar 2020 • Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu
Recently, Liu et al. [19] proposed a splitting steepest descent (S2D) method that jointly optimizes neural parameters and architectures, progressively growing the network by splitting neurons into multiple copies in a steepest-descent fashion.
no code implementations • 18 Mar 2020 • Jay Whang, Qi Lei, Alexandros G. Dimakis
We study image inverse problems with a normalizing flow prior.
no code implementations • ICLR 2021 • Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei
First, we study the setting where this common representation is low-dimensional and provide a fast rate of $O\left(\frac{\mathcal{C}\left(\Phi\right)}{n_1T} + \frac{k}{n_2}\right)$; here, $\Phi$ is the representation function class, $\mathcal{C}\left(\Phi\right)$ is its complexity measure, $k$ is the dimension of the representation, and $n_1T$ and $n_2$ are the numbers of source samples ($n_1$ per task over $T$ source tasks) and target samples, respectively.
no code implementations • 17 Feb 2020 • Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
Adversarial training has become one of the most effective methods for improving robustness of neural networks.
no code implementations • 17 Feb 2020 • Qi Lei, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang
A recent series of papers has established that variants of Gradient Descent/Ascent and Mirror Descent exhibit last-iterate convergence in convex-concave zero-sum games.
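The phenomenon is easy to see numerically on the bilinear game $\min_x \max_y xy$ (a standard illustration, not this paper's analysis): plain simultaneous GDA spirals away from the equilibrium, while the optimistic variant's last iterate converges.

```python
import numpy as np

# min_x max_y f(x, y) = x*y has a unique equilibrium at (0, 0).
eta = 0.1
xg, yg = 1.0, 1.0                  # plain simultaneous GDA iterate
x, y = 1.0, 1.0                    # optimistic GDA iterate
gx_prev, gy_prev = 0.0, 0.0
for t in range(200):
    # GDA: each step multiplies the distance to (0,0) by sqrt(1+eta^2).
    xg, yg = xg - eta * yg, yg + eta * xg
    # OGDA: extrapolated gradient 2*g_t - g_{t-1}; last iterate converges.
    gx, gy = y, x
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print("GDA  distance to equilibrium:", np.hypot(xg, yg))   # diverges
print("OGDA distance to equilibrium:", np.hypot(x, y))     # -> 0
```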
1 code implementation • NeurIPS 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a generalized variant of Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
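For reference, the projection-free template being generalized looks as follows: a textbook Frank-Wolfe iteration on an $\ell_1$ ball (not the generalized variant itself), where each step calls a linear minimization oracle that returns a 1-sparse vertex.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, tau = 100, 50, 3.0                # tau = l1-ball radius
A = rng.normal(size=(n, d))
x_star = np.zeros(d)
x_star[:3] = [2.0, -0.5, 0.5]           # sparse ground truth, ||.||_1 = tau
b = A @ x_star

x = np.zeros(d)
for t in range(200):
    grad = A.T @ (A @ x - b)            # gradient of 0.5*||Ax - b||^2
    # Linear minimization oracle over {||x||_1 <= tau}:
    # the optimum is a signed, scaled coordinate vertex.
    i = np.argmax(np.abs(grad))
    s = np.zeros(d)
    s[i] = -tau * np.sign(grad[i])
    gamma = 2.0 / (t + 2.0)             # standard step-size schedule
    x = (1 - gamma) * x + gamma * s
print("objective:", 0.5 * np.sum((A @ x - b) ** 2))
```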
no code implementations • 17 Oct 2019 • Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis
Large-scale machine learning training suffers from two major challenges, especially for nuclear-norm constrained problems on distributed systems: synchronization slowdowns due to straggling workers, and high communication costs.
no code implementations • ICML 2020 • Qi Lei, Jason D. Lee, Alexandros G. Dimakis, Constantinos Daskalakis
Generative adversarial networks (GANs) are a widely used framework for learning generative models.
1 code implementation • NeurIPS 2019 • Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis
For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected.
1 code implementation • 6 Jun 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis
We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.
1 code implementation • 1 Dec 2018 • Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock
In this paper we formulate attacks with discrete inputs on a set function as an optimization task.
1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock
The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoying a \emph{Random Features} (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in both the number and the length of the time series.
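The Random Features mechanism being invoked can be sketched for the standard Gaussian kernel (the classical construction, not the proposed time-series kernel): sample random projections once, then approximate kernel values by inner products of short feature maps.

```python
import numpy as np

rng = np.random.default_rng(5)
d, D, sigma = 10, 500, 1.0              # input dim, #features, bandwidth

# Random Fourier features for k(x, y) = exp(-||x-y||^2 / (2 sigma^2)).
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = phi(x) @ phi(y)                # kernel evaluation in O(D) time
print(f"exact {exact:.4f}  approx {approx:.4f}")
```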
1 code implementation • ICML 2018 • Jiong Zhang, Qi Lei, Inderjit S. Dhillon
Theoretically, we demonstrate that our parameterization does not lose any expressive power, and we show how it controls the generalization of RNNs on classification tasks.
6 code implementations • NeurIPS 2018 • Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney
Extensive experiments on multiple networks show that saddle points are not the cause of the generalization gap in large-batch training; the results consistently show that large-batch training converges to points with a noticeably larger Hessian spectrum.
no code implementations • ICML 2017 • Qi Lei, Ian En-Hsu Yen, Chao-yuan Wu, Inderjit S. Dhillon, Pradeep Ravikumar
We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations.
no code implementations • 21 Feb 2017 • Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan
To solve the proposed framework efficiently, we propose a parameter-free and scalable optimization algorithm that effectively exploits the sparse and low-rank structure of the tensor.
no code implementations • 12 Feb 2017 • Qi Lei, Jin-Feng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon
A considerable number of clustering algorithms take instance-feature matrices as their inputs.
2 code implementations • 10 Dec 2016 • Rashish Tandon, Qi Lei, Alexandros G. Dimakis, Nikos Karampatziakis
We propose a novel coding theoretic framework for mitigating stragglers in distributed learning.
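The flavor of the idea, in a toy instance with hand-picked (hypothetical) coefficients: each worker transmits a fixed linear combination of the partial gradients on its assigned chunks, chosen so that the full gradient is decodable from any subset of workers that survives the stragglers.

```python
import numpy as np

# Three data chunks, each with its partial gradient g_i (toy vectors).
rng = np.random.default_rng(6)
g = rng.normal(size=(3, 4))                 # rows: g1, g2, g3
full = g.sum(0)                             # the gradient we want

# Encoding matrix B: worker i sends B[i] @ g (two chunks per worker),
# chosen so any 2 of the 3 workers suffice (tolerates 1 straggler).
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0, 1.0]])
sends = B @ g                               # what each worker transmits

# Decoding: for each straggler pattern, a fixed combination of the
# two surviving workers reproduces g1 + g2 + g3 exactly.
decoders = {frozenset({0, 1}): {0: 1.0, 1: -1.0},
            frozenset({1, 2}): {1: 1.0, 2: 2.0},
            frozenset({0, 2}): {0: 0.5, 2: 1.0}}
for alive, coef in decoders.items():
    rec = sum(c * sends[i] for i, c in coef.items())
    assert np.allclose(rec, full), alive
print("full gradient recovered from every 2-worker subset")
```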
no code implementations • NeurIPS 2016 • Qi Lei, Kai Zhong, Inderjit S. Dhillon
The vanilla power method simultaneously updates all the coordinates of the iterate, which is essential for its convergence analysis.
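To contrast the two update styles, here is a bare-bones sketch (the coordinate-selection rule below is a naive heuristic, not the algorithm analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
M = rng.normal(size=(n, n))
A = M @ M.T / n                     # PSD matrix with a clear top eigenvector

def vanilla_power(A, iters=100):
    x = rng.normal(size=A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)      # every coordinate is refreshed
    return x

def coordinate_power(A, iters=400, k=20):
    """Update only the k coordinates that would change the most per step
    (a naive selection rule; the paper designs and analyzes its own).
    A real implementation maintains A @ x incrementally in O(kn) time."""
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        y /= np.linalg.norm(y)
        idx = np.argsort(np.abs(y - x))[-k:]
        x[idx] = y[idx]
        x /= np.linalg.norm(x)
    return x

for solver in (vanilla_power, coordinate_power):
    v = solver(A)
    print(solver.__name__, "Rayleigh quotient:", v @ A @ v)
```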
no code implementations • NeurIPS 2017 • Hsiang-Fu Yu, Cho-Jui Hsieh, Qi Lei, Inderjit S. Dhillon
Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of a low-rank matrix factorization model for a recommender system.
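Concretely, the prediction phase reduces to the following search (a brute-force sketch; the point of MIPS research is to beat this baseline):

```python
import numpy as np

rng = np.random.default_rng(8)
n_items, k = 100_000, 32

# Low-rank recommender: item and user factors from matrix factorization.
H = rng.normal(size=(n_items, k))   # item embeddings
u = rng.normal(size=k)              # one user's embedding

# Prediction = Maximum Inner Product Search over the item set.
# Note the inner product is not a metric (no triangle inequality), so
# standard nearest-neighbor indexes do not apply directly.
scores = H @ u                      # brute force: O(n_items * k)
top5 = np.argsort(scores)[-5:][::-1]
print("top items:", top5, "scores:", np.round(scores[top5], 2))
```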
no code implementations • 4 Sep 2015 • Arnaud Vandaele, Nicolas Gillis, Qi Lei, Kai Zhong, Inderjit Dhillon
Given a symmetric nonnegative matrix $A$, symmetric nonnegative matrix factorization (symNMF) is the problem of finding a nonnegative matrix $H$, usually with far fewer columns than $A$, such that $A \approx HH^T$.
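A minimal way to attack this objective is plain projected gradient descent on $\|A - HH^T\|_F^2$ (offered only as an illustration; the paper studies more specialized algorithms):

```python
import numpy as np

rng = np.random.default_rng(9)
n, r = 60, 4
H_true = rng.random((n, r))
A = H_true @ H_true.T                       # symmetric nonnegative input

# Projected gradient descent on f(H) = ||A - H H^T||_F^2 with H >= 0.
H = rng.random((n, r))
lr = 5e-4
for _ in range(5000):
    R = H @ H.T - A
    grad = 4 * R @ H                        # gradient of ||A - HH^T||_F^2
    H = np.maximum(H - lr * grad, 0.0)      # project onto the nonnegative orthant
print("relative error:", np.linalg.norm(A - H @ H.T) / np.linalg.norm(A))
```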