Generalization Bounds
131 papers with code
Latest papers
Stability and Generalization in Free Adversarial Training
In this work, we study the generalization performance of adversarial training methods using the algorithmic stability framework.
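For context, here is a minimal sketch of the "free" adversarial training scheme the paper analyzes (Shafahi et al., 2019), in which each minibatch is replayed several times and a single backward pass serves both the weight update and the perturbation update. This is an illustrative PyTorch fragment, not the paper's code; `model`, `loader`, and `optimizer` are assumed to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def free_adversarial_epoch(model, loader, optimizer, epsilon=8 / 255, replays=4):
    """One epoch of 'free' adversarial training: each minibatch is replayed
    `replays` times, and the gradient of a single backward pass is reused
    both to update the model and to take an ascent step on the adversarial
    perturbation delta (projected onto the L-infinity ball of radius epsilon)."""
    delta = None  # perturbation carried over across minibatches
    for x, y in loader:
        if delta is None or delta.shape != x.shape:
            delta = torch.zeros_like(x)
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()   # gradients w.r.t. both the weights and delta
            optimizer.step()  # weight update from the same backward pass
            # ascent step on delta, then projection onto the L-inf ball
            delta = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
```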
A PAC-Bayesian Framework for Optimal Control with Stability Guarantees
Based on the derived PAC-Bayesian bounds, we propose a new method for designing optimal controllers that offers a principled way to incorporate prior knowledge into the synthesis process, improving the control policy and mitigating overfitting.
Do Generated Data Always Help Contrastive Learning?
Contrastive Learning (CL) has emerged as one of the most successful paradigms for unsupervised visual representation learning, yet it often depends on intensive manual data augmentations.
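As background, here is a minimal version of the InfoNCE objective underlying SimCLR-style contrastive learning; data augmentation (or, in this paper's setting, generated data) supplies the two views of each image. This is a simplified sketch (one-directional loss, in-batch negatives only), not any specific paper's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Minimal InfoNCE loss for n positive pairs (z1[i], z2[i]): the i-th
    row of z1 should match the i-th row of z2 (its other view), with the
    remaining in-batch rows acting as negatives. z1, z2: (n, d) embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (n, n) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives: diagonal
    return F.cross_entropy(logits, labels)
```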
Leveraging PAC-Bayes Theory and Gibbs Distributions for Generalization Bounds with Complexity Measures
In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework.
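For readers new to the area, here is one classical instance of such a bound, with the Kullback-Leibler divergence to a prior playing the role of the complexity measure (McAllester's PAC-Bayesian bound with Maurer's constants). It is shown only to fix ideas and is not this paper's result.

```latex
% With probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q over hypotheses:
\mathbb{E}_{h \sim Q}\bigl[L(h)\bigr]
  \;\le\; \mathbb{E}_{h \sim Q}\bigl[\hat{L}_S(h)\bigr]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

The Gibbs distribution $Q_\lambda(h) \propto P(h)\, e^{-\lambda \hat{L}_S(h)}$ is the posterior that optimally trades the empirical term against the KL term, which is why it arises naturally in this line of work.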
Tighter Generalization Bounds on Digital Computers via Discrete Optimal Transport
Notably, $c_{m}\in \mathcal{O}(\sqrt{m})$ for learning models on discretized Euclidean domains.
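As a loosely related illustration of discrete optimal transport between empirical distributions, the snippet below computes the 1-D Wasserstein-1 distance with SciPy and shows its generic $\mathcal{O}(1/\sqrt{m})$ concentration. This is not the paper's construction, and $c_m$ above refers to the paper's own quantity.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Loosely related illustration: the 1-D Wasserstein-1 distance between an
# empirical sample of size m and a large reference sample shrinks roughly
# like 1/sqrt(m). This is generic empirical-measure concentration, not the
# paper's bound or its quantity c_m.
rng = np.random.default_rng(0)
reference = rng.normal(size=100_000)
for m in (100, 1_000, 10_000):
    sample = rng.normal(size=m)
    print(f"m={m:>6}: W1 = {wasserstein_distance(sample, reference):.4f}")
```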
Minimum Description Length and Generalization Guarantees for Representation Learning
Our new bounds do not involve the mutual information between the encoder's input and the representation, which is often believed in the related literature to reflect the algorithm's generalization capability but in fact falls short of doing so. Instead, they involve the "multi-letter" relative entropy between the distribution of the representations (or labels) of the training and test sets and a fixed prior.
Sequence Length Independent Norm-Based Generalization Bounds for Transformers
This paper provides norm-based generalization bounds for the Transformer architecture that do not depend on the input sequence length.
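To make "norm-based" concrete, the sketch below computes a generic capacity proxy: the product of the spectral norms of a model's weight matrices (in the spirit of Bartlett et al., 2017). It is not the paper's bound, but it illustrates the key point that such quantities depend only on the weights, never on the input sequence length.

```python
import torch

def spectral_norm_product(model):
    """Generic norm-based capacity proxy: the product of the spectral norms
    (largest singular values) of all 2-D weight matrices in the model.
    Biases and 1-D parameters (e.g. LayerNorm scales) are skipped."""
    prod = 1.0
    for p in model.parameters():
        if p.dim() == 2:
            prod *= torch.linalg.matrix_norm(p, ord=2).item()
    return prod

# e.g. on a single Transformer encoder layer:
layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4)
print(spectral_norm_product(layer))
```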
Comparing Comparators in Generalization Bounds
We derive generic information-theoretic and PAC-Bayesian generalization bounds involving an arbitrary convex comparator function, which measures the discrepancy between the training and population loss.
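One standard template of this family (due to Bégin et al., 2016) is reproduced here for context; the paper generalizes and compares choices of the comparator $\Delta$.

```latex
% For any convex comparator \Delta : [0,1]^2 \to \mathbb{R}, with probability
% at least 1 - \delta over an i.i.d. sample S of size m, simultaneously for
% all posteriors Q:
\Delta\!\Bigl(\mathbb{E}_{h \sim Q}\hat{L}_S(h),\; \mathbb{E}_{h \sim Q}L(h)\Bigr)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{I_\Delta(m)}{\delta}}{m},
\quad\text{where}\quad
I_\Delta(m) = \sup_{r \in [0,1]} \sum_{k=0}^{m} \binom{m}{k} r^k (1-r)^{m-k}
  \, e^{\, m\, \Delta(k/m,\, r)}
```

Taking $\Delta$ to be the binary relative entropy $\mathrm{kl}(q, p)$ recovers the classical Langford-Seeger bound, with $I_\Delta(m) \le 2\sqrt{m}$.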
A path-norm toolkit for modern networks: consequences, promises and challenges
The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds by numerically evaluating the sharpest known bounds for ResNets on ImageNet.
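For intuition, the $\ell_1$ path norm of a bias-free ReLU multilayer perceptron can be computed exactly with one forward pass of an all-ones input through absolute-valued weights. The paper's toolkit covers general modern architectures (skip connections, pooling, biases); the sketch below handles only this simplest case.

```python
import torch

def l1_path_norm(mlp):
    """L1 path norm of a bias-free ReLU MLP: the sum over all input-output
    paths of the product of |weight| along the path. Since ReLU acts as the
    identity on nonnegative activations, pushing an all-ones input through
    the network with absolute-valued weights computes it exactly."""
    x = torch.ones(1, mlp[0].in_features)
    for layer in mlp:
        if isinstance(layer, torch.nn.Linear):
            x = x @ layer.weight.abs().t()
    return x.sum().item()

mlp = torch.nn.Sequential(
    torch.nn.Linear(10, 32, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(32, 1, bias=False),
)
print(l1_path_norm(mlp))
```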
Learning to Warm-Start Fixed-Point Optimization Algorithms
We introduce a machine-learning framework to warm-start fixed-point optimization algorithms.
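In outline, the idea is to learn a map from problem parameters to an initial iterate and to train it by differentiating through a few fixed-point iterations. The toy sketch below (a contractive affine operator, a linear warm-start map, all names illustrative) follows that recipe but is not the paper's implementation.

```python
import torch

d, k_iters = 8, 5
A = 0.5 * torch.eye(d)              # toy contractive operator, factor 1/2

def T(z, theta):
    """One fixed-point iteration z <- T(z, theta); here z* = 2 * theta."""
    return z @ A.t() + theta

warm_start = torch.nn.Linear(d, d)  # learned map: parameters theta -> z0
opt = torch.optim.Adam(warm_start.parameters(), lr=1e-2)

for step in range(500):
    theta = torch.randn(64, d)      # a batch of random problem instances
    z = warm_start(theta)           # predicted warm start
    for _ in range(k_iters):        # differentiate through the iterations
        z = T(z, theta)
    loss = (z - T(z, theta)).pow(2).sum(dim=1).mean()  # fixed-point residual
    opt.zero_grad()
    loss.backward()
    opt.step()
```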