Search Results for author: Konstantin Donhauser

Found 11 papers, 10 with code

Detecting critical treatment effect bias in small subgroups

1 code implementation • 29 Apr 2024 • Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, Fanny Yang

Randomized trials are considered the gold standard for making informed decisions in medicine, yet they often lack generalizability to the patient populations in clinical practice.

Tasks: Benchmarking, Decision Making (+1)

Privacy-preserving data release leveraging optimal transport and particle gradient descent

1 code implementation • 31 Jan 2024 • Konstantin Donhauser, Javier Abad, Neha Hulkund, Fanny Yang

We present a novel approach for differentially private data synthesis of protected tabular datasets, a relevant task in highly sensitive domains such as healthcare and government.

Tasks: Privacy Preserving
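
The snippet above only names the ingredients (optimal transport, particle gradient descent). As a rough, hypothetical illustration of the general idea, the Python sketch below fits synthetic "particles" by gradient descent so that their statistics match Laplace-privatized summary statistics; it matches simple per-feature means rather than an optimal-transport distance, so it is a toy under made-up parameters, not the paper's algorithm.

```python
import torch

# Illustrative toy (not the paper's method): fit synthetic "particles" by
# gradient descent so that their statistics match differentially private
# (Laplace-noised) summary statistics of a sensitive dataset.
torch.manual_seed(0)

n, d = 500, 4
data = torch.rand(n, d)  # hypothetical sensitive records, features in [0, 1]

# Laplace mechanism for the d per-feature means: each mean has sensitivity
# 1/n, and the privacy budget epsilon is split evenly across the d queries.
epsilon = 1.0
scale = (1.0 / n) * d / epsilon
noisy_means = data.mean(0) + torch.distributions.Laplace(0.0, scale).sample((d,))

# Synthetic particles see only the privatized statistics, never the raw data.
particles = torch.rand(200, d, requires_grad=True)
opt = torch.optim.Adam([particles], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = ((particles.mean(0) - noisy_means) ** 2).sum()
    loss.backward()
    opt.step()

print("noisy target means:", noisy_means)
print("synthetic means:   ", particles.mean(0).detach())
```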

Hidden yet quantifiable: A lower bound for confounding strength using randomized trials

2 code implementations • 6 Dec 2023 • Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, Fanny Yang

Further, we show how our lower bound can correctly identify the absence and presence of unobserved confounding in a real-world setting.


Strong inductive biases provably prevent harmless interpolation

1 code implementation • 18 Jan 2023 • Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang

Classical wisdom suggests that estimators should avoid fitting noise to achieve good generalization.

Tasks: Inductive Bias

Tight bounds for maximum $\ell_1$-margin classifiers

no code implementations • 7 Dec 2022 • Stefan Stojanovic, Konstantin Donhauser, Fanny Yang

In particular, for the noiseless setting, we prove tight upper and lower bounds for the prediction error that match existing rates of order $\frac{\|w^*\|_1^{2/3}}{n^{1/3}}$ for general ground truths.

Fast Rates for Noisy Interpolation Require Rethinking the Effects of Inductive Bias

1 code implementation • 7 Mar 2022 • Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, Fanny Yang

Good generalization performance on high-dimensional data crucially hinges on a simple structure of the ground truth and a corresponding strong inductive bias of the estimator.

Tasks: Inductive Bias

Tight bounds for minimum $\ell_1$-norm interpolation of noisy data

1 code implementation • 10 Nov 2021 • Guillaume Wang, Konstantin Donhauser, Fanny Yang

We provide matching upper and lower bounds of order $\sigma^2/\log(d/n)$ for the prediction error of the minimum $\ell_1$-norm interpolator, a.k.a. basis pursuit.
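
Since the object being bounded is straightforward to compute, here is a small, self-contained sketch of the minimum $\ell_1$-norm interpolator (basis pursuit) via its standard linear-programming form: minimize $\|w\|_1$ subject to $Xw = y$, using the split $w = u - v$ with $u, v \ge 0$. The dimensions, sparsity, and noise level below are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Minimum l1-norm interpolation (basis pursuit): min ||w||_1 s.t. Xw = y,
# written as an LP over w = u - v with u, v >= 0. Setup is illustrative only.
rng = np.random.default_rng(0)
n, d = 20, 100                              # overparameterized: d > n
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:3] = 1.0                            # sparse ground truth
y = X @ w_star + 0.1 * rng.standard_normal(n)  # noisy labels

c = np.ones(2 * d)                          # objective sum(u) + sum(v) = ||w||_1
A_eq = np.hstack([X, -X])                   # equality constraint X(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
w = res.x[:d] - res.x[d:]

assert np.allclose(X @ w, y, atol=1e-6)     # the solution interpolates the data
print("l1 norm of interpolator:", np.abs(w).sum())
```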

Interpolation can hurt robust generalization even when there is no noise

2 code implementations • NeurIPS 2021 • Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang

Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers.

Tasks: Regression

Maximizing the robust margin provably overfits on noiseless data

1 code implementation • ICML Workshop AML 2021 • Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, Fanny Yang

Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions.

Tasks: Attribute
