Generalization Bounds

131 papers with code • 0 benchmarks • 0 datasets

Generalization bounds provide theoretical guarantees on a model's expected error on unseen data, typically stated in terms of its empirical (training) error together with quantities such as model complexity, sample size, or the distance between training and test distributions.

Most implemented papers

Bridging Theory and Algorithm for Domain Adaptation

thuml/MDD 11 Apr 2019

We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored both to distribution comparison with an asymmetric margin loss and to minimax optimization for easier training.
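
Schematically, the discrepancy compares the margin disparity of an auxiliary scoring function f' with the predictions of the main scorer f across the two domains:

$$d^{(\rho)}_{f,\mathcal{F}}(P, Q) \;=\; \sup_{f' \in \mathcal{F}} \Big( \mathrm{disp}^{(\rho)}_{Q}(f', f) \;-\; \mathrm{disp}^{(\rho)}_{P}(f', f) \Big),$$

where P and Q are the source and target distributions and disp denotes the expected margin loss of f' measured against the labels predicted by f. This is a sketch of the general form; the paper gives the precise margin and loss definitions.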

Estimating individual treatment effect: generalization bounds and algorithms

clinicalml/cfrnet ICML 2017

We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation.
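
In schematic form, dropping constants and an irreducible-variance term, the bound reads:

$$\epsilon_{\mathrm{ITE}}(f, \Phi) \;\lesssim\; \epsilon_{\mathrm{factual}}(f, \Phi) \;+\; B_{\Phi} \cdot \mathrm{IPM}_{G}\big(p^{t=1}_{\Phi},\, p^{t=0}_{\Phi}\big),$$

where Φ is the learned representation, ε_factual is the standard supervised error, and the integral probability metric term (e.g., MMD or Wasserstein) measures the distance between the representations of the treated (t=1) and control (t=0) populations. The exact constants and assumptions are in the paper.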

SWAD: Domain Generalization by Seeking Flat Minima

khanrc/swad NeurIPS 2021

Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains.
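
A minimal sketch of the weight-averaging idea behind seeking flat minima, assuming a PyTorch training loop; note that SWAD itself additionally selects a dense averaging window by validation loss, so this is only the averaging ingredient (see khanrc/swad for the actual algorithm):

```python
import copy
import torch

def average_weights(checkpoints):
    """Parameter-wise average of a list of model state_dicts."""
    avg = copy.deepcopy(checkpoints[0])
    for key in avg:
        avg[key] = torch.stack([c[key].float() for c in checkpoints]).mean(dim=0)
    return avg

# Hypothetical usage: snapshot the model densely inside the chosen window,
# then evaluate the averaged weights.
# checkpoints = [copy.deepcopy(model.state_dict()) for each step in the window]
# model.load_state_dict(average_weights(checkpoints))
```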

Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data

gkdziugaite/pacbayes-opt 31 Mar 2017

One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data.
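
The approach directly optimizes a PAC-Bayes bound over a stochastic network. A standard PAC-Bayes-kl bound of roughly the form used here states that, with probability at least 1 − δ over an i.i.d. sample of size m, for every posterior ρ over weights and any fixed prior π:

$$\mathrm{kl}\big(\hat{e}(\rho)\,\big\|\,e(\rho)\big) \;\le\; \frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{m}}{\delta}}{m},$$

where ê(ρ) and e(ρ) are the empirical and true error rates of the randomized classifier and kl is the binary KL divergence. Making the right-hand side small for a deep network with far more parameters than data is what yields a nonvacuous bound.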

Optimal Auctions through Deep Learning: Advances in Differentiable Economics

saisrivatsan/deep-opt-auctions 12 Jun 2017

Designing an incentive compatible auction that maximizes expected revenue is an intricate task.

A Surprising Linear Relationship Predicts Test Performance in Deep Networks

brando90/Generalization-Puzzles-in-Deep-Networks 25 Jul 2018

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors?

Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees

virtuosoresearch/robust-fine-tuning 6 Jun 2022

We study the generalization properties of fine-tuning to understand the problem of overfitting, which has often been observed (e.g., when the target dataset is small or when the training labels are noisy).
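
One standard ingredient in such fine-tuning analyses is a penalty on the distance from the pre-trained initialization. A minimal, hypothetical sketch of that ingredient follows; the paper's actual algorithm weights this distance using Hessian information (see virtuosoresearch/robust-fine-tuning):

```python
import torch

def regularized_loss(task_loss, model, init_params, lam=0.01):
    """Task loss plus an L2 penalty on the distance to initialization."""
    dist = sum(((p - p0) ** 2).sum()
               for p, p0 in zip(model.parameters(), init_params))
    return task_loss + lam * dist

# Before fine-tuning, snapshot the pre-trained weights:
# init_params = [p.detach().clone() for p in model.parameters()]
# Inside the loop: loss = regularized_loss(criterion(out, y), model, init_params)
```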

Deep multi-Wasserstein unsupervised domain adaptation

CtrlZ1/Domain-Adaptation-Algorithms Pattern Recognition Letters 2019

In unsupervised domain adaptation (DA), one aims to learn, from labeled source data and fully unlabeled target examples, a model with low error on the target domain.
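
As a purely illustrative example of the distribution distance such methods minimize, the empirical Wasserstein-1 distance between one-dimensional source and target feature projections can be computed with SciPy; the paper works with multiple Wasserstein terms on learned, higher-dimensional representations:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=1000)  # stand-in for labeled source features
target_feats = rng.normal(0.5, 1.2, size=1000)  # stand-in for unlabeled target features

# Empirical Wasserstein-1 distance between the two feature distributions.
print(wasserstein_distance(source_feats, target_feats))
```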

Learning Robust State Abstractions for Hidden-Parameter Block MDPs

facebookresearch/mtrl ICLR 2021

Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks rather than on the number of tasks, a significant improvement over prior work that uses the same environment assumptions.