no code implementations • 8 Apr 2024 • Ruiqi Zhang, Licong Lin, Yu Bai, Song Mei
LLM unlearning aims to eliminate the influence of undesirable data from the pre-trained model while preserving the model's utility on other tasks.
no code implementations • 14 Nov 2023 • Michael Celentano, Zhou Fan, Licong Lin, Song Mei
In settings where it is conjectured that no efficient algorithm can find this local neighborhood, we prove analogous geometric properties for a local minimizer of the TAP free energy reachable by AMP, and show that posterior inference based on this minimizer remains correctly calibrated.
no code implementations • 12 Oct 2023 • Licong Lin, Yu Bai, Song Mei
This provides the first quantitative analysis of the in-context reinforcement learning (ICRL) capabilities of transformers pretrained on offline trajectories.
1 code implementation • NeurIPS 2023 • Licong Lin, Mufang Ying, Suvrojit Ghosh, Koulik Khamaru, Cun-Hui Zhang
Even in linear models, the Ordinary Least Squares (OLS) estimator may fail to exhibit asymptotic normality for single-coordinate estimation and may have inflated error.
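A minimal simulation can make the setting concrete (this sketch is illustrative and not taken from the paper): data collected by an epsilon-greedy two-armed bandit is adaptive, because each arm choice depends on past rewards, so the arm counts in the OLS design matrix are data-dependent and classical i.i.d. asymptotics need not apply. The function names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_adaptive(n, true_means=(0.0, 0.0), eps=0.1, noise=1.0):
    """Epsilon-greedy two-armed bandit: each arm choice depends on past
    rewards, so the resulting design is adaptive rather than i.i.d."""
    arms, rewards = [], []
    est, counts = [0.0, 0.0], [0, 0]
    for t in range(n):
        if t < 2 or rng.random() < eps:
            a = int(rng.integers(2))        # explore
        else:
            a = int(np.argmax(est))         # exploit the current best arm
        r = true_means[a] + noise * rng.standard_normal()
        arms.append(a)
        rewards.append(r)
        counts[a] += 1
        est[a] += (r - est[a]) / counts[a]  # running mean per arm
    return np.array(arms), np.array(rewards)

arms, rewards = collect_adaptive(2000)
# OLS with one-hot arm indicators: the coefficients are the per-arm sample
# means, but the number of samples per arm is itself random and data-driven.
X = np.column_stack([(arms == 0).astype(float), (arms == 1).astype(float)])
beta_ols, *_ = np.linalg.lstsq(X, rewards, rcond=None)
```

Because the sample sizes per arm are chosen by the algorithm, the distribution of `beta_ols` around the true means is not guaranteed to be the usual Gaussian limit, which is the phenomenon the paper addresses.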
no code implementations • 30 May 2023 • Licong Lin, Tijana Zrnic
A complementary family of solutions makes use of explicit models for the feedback, such as best-response models in strategic classification, enabling significantly faster rates.
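As a hedged illustration of what a best-response feedback model looks like (a standard construction in the strategic-classification literature, not the paper's specific model; all names and the cost parameter are hypothetical): agents facing a linear classifier move just past the decision boundary whenever the gain from a positive label exceeds a quadratic movement cost.

```python
import numpy as np

def best_response(X, w, b, cost=1.0):
    """Agents classified by sign(w.x + b) shift their features just across
    the decision boundary when the +1 utility of a positive label exceeds
    the quadratic cost of moving -- a simple best-response feedback model."""
    w_norm = np.linalg.norm(w)
    scores = X @ w + b
    X_new = X.copy()
    for i in range(len(X)):
        if scores[i] < 0:                       # currently labeled negative
            move = -scores[i] / w_norm + 1e-6   # distance to just cross
            if cost * move**2 < 1.0:            # gain outweighs cost
                X_new[i] = X[i] + move * w / w_norm
    return X_new

w, b = np.array([1.0, 0.0]), 0.0
X = np.array([[-0.3, 0.5],    # negative, cheap to move: will respond
              [0.8, -0.2]])   # already positive: stays put
X_shift = best_response(X, w, b)
```

Knowing this response map in closed form is what lets model-based methods anticipate the feedback instead of learning it purely from repeated deployment.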
no code implementations • 5 Mar 2023 • Licong Lin, Koulik Khamaru, Martin J. Wainwright
Many standard estimators, when applied to adaptively collected data, fail to be asymptotically normal, thereby complicating the construction of confidence intervals.
1 code implementation • 4 Nov 2022 • Taejoo Ahn, Licong Lin, Song Mei
In this paper, we develop near-optimal multiple testing procedures for high-dimensional Bayesian linear models with isotropic covariates.
1 code implementation • 11 Oct 2020 • Licong Lin, Edgar Dobriban
This reveals that the variance is unimodal as a function of the level of parametrization, and allows the variance to be decomposed into components arising from label noise, initialization, and randomness in the training data, clarifying the sources of the error.
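The decomposition idea can be sketched with a Monte Carlo law-of-total-variance calculation (a simplified illustration, not the paper's setup: plain OLS has no initialization randomness, so only the label-noise and data-sampling components appear here).

```python
import numpy as np

# Fixed ground truth and a fixed test point.
rng = np.random.default_rng(1)
d, n, sigma = 5, 50, 1.0
beta = rng.standard_normal(d)
x_test = rng.standard_normal(d)

def fit_predict(noise_seed, data_seed):
    """Fit OLS on a fresh training set and predict at x_test; the two seeds
    separate the randomness of the covariates from that of the label noise."""
    r_data = np.random.default_rng(1000 + data_seed)
    r_noise = np.random.default_rng(2000 + noise_seed)
    X = r_data.standard_normal((n, d))
    y = X @ beta + sigma * r_noise.standard_normal(n)
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_test @ bhat

# preds[j, i]: data seed j (rows), noise seed i (columns).
preds = np.array([[fit_predict(i, j) for i in range(40)] for j in range(40)])

total_var = preds.var()
var_from_noise = preds.var(axis=1).mean()   # E_data[ Var_noise ]
var_from_data = preds.mean(axis=1).var()    # Var_data[ E_noise ]
# ANOVA identity: total variance = noise component + data component.
```

With population (`ddof=0`) variances and equal group sizes this within/between split is an exact identity, which is what makes such source-by-source attributions well defined.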
1 code implementation • 20 Dec 2019 • Yuzheng Hu, Licong Lin, Shange Tang
To the best of our knowledge, this is the first paper to seriously examine whether the square root is necessary in adaptive gradient methods.
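For context on where that square root sits, here is a generic Adam-style update with a flag to drop it (an illustrative sketch, not the paper's exact algorithm or hyperparameters):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8,
              use_sqrt=True):
    """One Adam-style update.  The square root on the bias-corrected
    second-moment estimate v_hat is the ingredient in question;
    use_sqrt=False divides by v_hat directly instead."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)               # bias-corrected second moment
    denom = np.sqrt(v_hat) + eps if use_sqrt else v_hat + eps
    return theta - lr * m_hat / denom, m, v

# Standard (square-root) variant minimizing f(x) = x^2.
x, m, v = np.array([1.0]), 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)

# A single step of the no-square-root variant for comparison.
x2, m2, v2 = adam_step(np.array([1.0]), np.array([2.0]), 0.0, 0.0, 1,
                       use_sqrt=False)
```

With the square root, the per-coordinate step size scales like `lr * m_hat / |g|`-ish, keeping updates bounded; dividing by `v_hat` itself changes that scaling, which is why the choice matters for stability.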