no code implementations • 28 Feb 2024 • Danny Halawi, Fred Zhang, Chen Yueh-Han, Jacob Steinhardt
In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters.
no code implementations • 17 Jan 2024 • Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan
In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.
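For concreteness, one standard formalization of strongly adaptive regret (following Daniely, Gonen, and Shalev-Shwartz, 2015; the loss notation $\ell_t$ and decision set $\mathcal{K}$ here are assumptions, not taken from the abstract) is the worst-case regret over every contiguous window of a given length:

$$\mathrm{SA\text{-}Regret}_T(k) \;=\; \max_{I=[s,\,s+k-1]\subseteq [T]} \left( \sum_{t\in I} \ell_t(x_t) \;-\; \min_{x\in\mathcal{K}} \sum_{t\in I} \ell_t(x) \right),$$

i.e., the algorithm must compete with the best fixed comparator on every interval $I$ simultaneously, not just over the full horizon $[T]$.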
no code implementations • 27 Sep 2023 • Fred Zhang, Neel Nanda
Mechanistic interpretability seeks to understand the internal mechanisms of machine learning models, where localization -- identifying the important model components -- is a key step.
no code implementations • 3 Mar 2023 • David P. Woodruff, Fred Zhang, Samson Zhou
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times), given a set of $n$ experts who make predictions on each day (or time).
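As a point of reference for the experts setting described above, here is a minimal sketch of the classical weighted-majority / multiplicative-weights baseline for binary outcomes. It keeps one weight per expert, so it uses $O(n)$ space; it is the standard full-memory baseline, not the algorithm from this paper. All names and the penalty parameter `eta` are illustrative choices.

```python
def multiplicative_weights(expert_predictions, outcomes, eta=0.5):
    """Classical weighted-majority baseline for prediction with expert
    advice (binary outcomes). Maintains one weight per expert (O(n)
    space); experts that err are penalized multiplicatively.

    expert_predictions: per-day lists of n expert predictions in {0, 1}.
    outcomes: the realized outcome in {0, 1} for each day.
    Returns the number of mistakes the aggregator makes.
    """
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        total = sum(weights)
        # Predict by weighted majority vote.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        guess = 1 if vote_one >= total / 2 else 0
        mistakes += int(guess != outcome)
        # Multiplicatively penalize every expert that erred today.
        weights = [w * (1 - eta) if p != outcome else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

The classical guarantee is that the aggregator's mistake count is within a constant factor (plus an $O(\log n)$ term) of the best expert's; the sub-linear-space question is what remains achievable when the algorithm cannot afford one weight per expert.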
no code implementations • 15 Dec 2022 • Daniel Alabi, Pravesh K. Kothari, Pranay Tankala, Prayaag Venkat, Fred Zhang
We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight.
no code implementations • 30 Sep 2022 • David P. Woodruff, Fred Zhang, Qiuyi Zhang
Specifically, for any $m$ matrices $A_1, \ldots, A_m$ with consecutive differences bounded in Schatten-$1$ norm by $\alpha$, we provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability with an optimal query complexity of $\widetilde{O}\left(m \alpha\sqrt{\log(1/\delta)}/\epsilon + m\log(1/\delta)\right)$, improving the dependence on both $\alpha$ and $\delta$ from Dharangutte and Musco (NeurIPS, 2021).
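To illustrate the matrix-vector query model underlying this line of work, here is a sketch of the classical Hutchinson trace estimator, which the paper's binary-tree procedure improves upon for a sequence of slowly changing matrices. This is the textbook single-matrix estimator, not the paper's algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def hutchinson_trace(A, num_queries=100, rng=None):
    """Classical Hutchinson estimator: tr(A) is approximated by the
    mean of g^T A g over random Rademacher (+/-1) vectors g, since
    E[g^T A g] = tr(A). Each sample costs one matrix-vector query,
    which is the query model referred to in the abstract.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    estimates = []
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher query vector
        estimates.append(g @ (A @ g))        # one matrix-vector query
    return float(np.mean(estimates))
```

Running this independently on each of $m$ matrices ignores the bounded Schatten-$1$ drift between consecutive matrices; exploiting that drift is what yields the improved joint query complexity above.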
no code implementations • 16 Jul 2022 • Binghui Peng, Fred Zhang
We provide the first sub-linear space and sub-linear regret algorithm for online learning with expert advice (against an oblivious adversary), addressing an open question raised recently by Srinivas, Woodruff, Xu and Zhou (STOC 2022).
no code implementations • NeurIPS 2020 • Alexander Wei, Fred Zhang
They provide robustness-consistency trade-offs for a variety of online problems.
no code implementations • NeurIPS 2020 • Samuel B. Hopkins, Jerry Li, Fred Zhang
In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions.
no code implementations • 13 Aug 2019 • Zhixian Lei, Kyle Luh, Prayaag Venkat, Fred Zhang
The goal is to design an efficient estimator that attains the optimal sub-gaussian error bound, only assuming that the random vector has bounded mean and covariance.
1 code implementation • NeurIPS 2019 • Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
We perform an experimental study of the dynamics of Stochastic Gradient Descent (SGD) in learning deep neural networks for several real and synthetic classification tasks.