Generalization Bounds
131 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Generalization Guarantees for Imitation Learning
Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert's policies.
Minimax Classification with 0-1 Loss and Performance Guarantees
We also present MRCs' finite-sample generalization bounds in terms of training size and smallest minimax risk, and show their competitive classification performance w.r.t.
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime.
Generalization Bounds for Sparse Random Feature Expansions
In particular, we provide generalization bounds for functions in a certain class (that is dense in a reproducing kernel Hilbert space) depending on the number of samples and the distribution of features.
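A sparse random feature expansion can be sketched in a few lines: map inputs through random Fourier features, fit a linear model, then keep only the largest-magnitude coefficients. This is a minimal toy illustration of the general idea, not the paper's construction; the data, feature count, and thresholding rule here are all illustrative choices. The train/test error gap printed at the end is the quantity that generalization bounds of this kind control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy sine target on a 1-d input.
X_train = rng.uniform(-3, 3, size=(200, 1))
X_test = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(200)
y_test = np.sin(X_test).ravel() + 0.1 * rng.standard_normal(200)

def random_fourier_features(X, W, b):
    """Random feature map phi(x) = sqrt(2/m) * cos(x @ W + b)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

m = 50  # number of random features (illustrative)
W = rng.standard_normal((1, m))
b = rng.uniform(0, 2 * np.pi, size=m)

Phi_train = random_fourier_features(X_train, W, b)
Phi_test = random_fourier_features(X_test, W, b)

# Ridge regression on the feature expansion (closed form).
lam = 1e-3
coef = np.linalg.solve(Phi_train.T @ Phi_train + lam * np.eye(m),
                       Phi_train.T @ y_train)

# Sparsify: keep only the s largest-magnitude coefficients.
s = 10
keep = np.argsort(np.abs(coef))[-s:]
sparse_coef = np.zeros_like(coef)
sparse_coef[keep] = coef[keep]

train_mse = np.mean((Phi_train @ sparse_coef - y_train) ** 2)
test_mse = np.mean((Phi_test @ sparse_coef - y_test) ** 2)
print("train MSE:", train_mse, "test MSE:", test_mse,
      "gap:", test_mse - train_mse)
```

Bounds of the kind described above scale with the number of samples, the number of retained features, and the feature distribution.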
Robust Generalization despite Distribution Shift via Minimum Discriminating Information
Training models that perform well under distribution shifts is a central challenge in machine learning.
Personalized Federated Learning through Local Memorization
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Fast Interpretable Greedy-Tree Sums
In such settings, practitioners often use highly interpretable decision tree models, but these suffer from inductive bias against additive structure.
NICO++: Towards Better Benchmarking for Domain Generalization
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise on the limited number of domains.
Transformers as Algorithms: Generalization and Stability in In-context Learning
We first explore the statistical aspects of this abstraction through the lens of multitask learning: We obtain generalization bounds for ICL when the input prompt is (1) a sequence of i.i.d.
Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion
Graph neural networks are widely used tools for graph prediction tasks.