1 code implementation • 29 Apr 2024 • Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, Fanny Yang
Randomized trials are considered the gold standard for making informed decisions in medicine, yet they often lack generalizability to the patient populations in clinical practice.
1 code implementation • 31 Jan 2024 • Konstantin Donhauser, Javier Abad, Neha Hulkund, Fanny Yang
We present a novel approach for differentially private data synthesis of protected tabular datasets, a relevant task in highly sensitive domains such as healthcare and government.
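The synthesis mechanism itself is not described in this excerpt; as a minimal, generic illustration of the differential-privacy guarantee such methods build on (not the authors' algorithm), here is an epsilon-DP counting query via the Laplace mechanism. All names below are illustrative.

```python
import numpy as np

def dp_count(data, predicate, epsilon, rng):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding/removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for the epsilon-DP guarantee.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)
records = [18, 25, 31, 47, 52, 63, 29]
# Noisy answer to "how many records are over 30?"
noisy = dp_count(records, lambda age: age > 30, epsilon=1.0, rng=rng)
```

Smaller `epsilon` means stronger privacy but noisier answers; tabular synthesis methods compose many such noisy measurements under a total privacy budget.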
2 code implementations • 6 Dec 2023 • Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, Fanny Yang
Further, we show how our lower bound can correctly identify the absence and presence of unobserved confounding in a real-world setting.
1 code implementation • 18 Jan 2023 • Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang
Classical wisdom suggests that estimators should avoid fitting noise to achieve good generalization.
no code implementations • 7 Dec 2022 • Stefan Stojanovic, Konstantin Donhauser, Fanny Yang
In particular, for the noiseless setting, we prove tight upper and lower bounds for the prediction error that match existing rates of order $\frac{\|w^*\|_1^{2/3}}{n^{1/3}}$ for general ground truths.
1 code implementation • 7 Mar 2022 • Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, Fanny Yang
Good generalization performance on high-dimensional data crucially hinges on a simple structure of the ground truth and a corresponding strong inductive bias of the estimator.
1 code implementation • 10 Nov 2021 • Guillaume Wang, Konstantin Donhauser, Fanny Yang
We provide matching upper and lower bounds of order $\sigma^2/\log(d/n)$ for the prediction error of the minimum $\ell_1$-norm interpolator, a.k.a.
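The minimum $\ell_1$-norm interpolator studied here can be computed exactly as a linear program: minimize $\|w\|_1$ subject to $Xw = y$, using the standard split $w = u - v$ with $u, v \ge 0$. The sketch below is illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_interpolator(X, y):
    """argmin ||w||_1 s.t. Xw = y, as an LP.

    Split w = u - v with u, v >= 0; then ||w||_1 = sum(u) + sum(v)
    at the optimum, and the constraint becomes X(u - v) = y.
    """
    n, d = X.shape
    c = np.ones(2 * d)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([X, -X])          # X(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * d))
    u, v = res.x[:d], res.x[d:]
    return u - v

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))      # overparameterized: d > n
y = rng.standard_normal(10)
w = min_l1_interpolator(X, y)          # interpolates: X @ w == y
```

In the overparameterized regime ($d > n$) the constraint set is nonempty almost surely, and the $\ell_1$ objective selects a sparse interpolant.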
2 code implementations • NeurIPS 2021 • Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers.
1 code implementation • ICML Workshop AML 2021 • Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions.
2 code implementations • 9 Apr 2021 • Konstantin Donhauser, Mingqi Wu, Fanny Yang
Kernel ridge regression is well-known to achieve minimax optimal rates in low-dimensional settings.
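As background for this entry, kernel ridge regression with a Gaussian kernel fits in a few lines: the predictor is $f(x) = k(x, X)(K + \lambda I)^{-1} y$. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def kernel_ridge_predict(X_train, y_train, X_test, bandwidth=0.2, ridge=1e-3):
    """Kernel ridge regression with an RBF kernel.

    Solves (K + ridge * I) alpha = y on the training data, then
    predicts f(x) = sum_i alpha_i * k(x, x_i).
    """
    def rbf(A, B):
        sq = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-sq / (2.0 * bandwidth**2))

    K = rbf(X_train, X_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
    return rbf(X_test, X_train) @ alpha

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # low-dimensional input
y = np.sin(2.0 * np.pi * X).ravel()
y_hat = kernel_ridge_predict(X, y, X)          # near-interpolation for small ridge
```

The ridge parameter trades bias for variance; the low-dimensional minimax-optimality alluded to above concerns how this trade-off scales with the sample size.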
1 code implementation • 19 Mar 2019 • Thomas Ziegler, Manuel Fritsche, Lorenz Kuhn, Konstantin Donhauser
Dilated Convolutions have been shown to be highly useful for the task of image segmentation.
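A dilated (atrous) convolution spaces the kernel taps `dilation` samples apart, enlarging the receptive field without adding parameters; this is what makes it attractive for segmentation. A minimal 1-D sketch (illustrative only):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D dilated convolution (cross-correlation form).

    Taps are spaced `dilation` samples apart, so a kernel of size k
    covers a receptive field of dilation * (k - 1) + 1 samples.
    Output length: len(x) - dilation * (k - 1).
    """
    k = len(kernel)
    span = dilation * (k - 1)
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(6, dtype=float)
# With dilation=2, each output sums inputs two samples apart.
y = dilated_conv1d(x, kernel=np.array([1.0, 1.0]), dilation=2)
```

With `dilation=1` this reduces to an ordinary valid convolution; stacking layers with growing dilation (1, 2, 4, ...) gives an exponentially large receptive field at linear parameter cost.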