Generalization Bounds
131 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Generalization Bounds.
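For context, a generalization bound controls the gap between a model's empirical (training) risk and its true risk. The simplest example is a Hoeffding-style bound for a single fixed hypothesis with a bounded loss; a minimal sketch (the function name is illustrative, not from any of the papers below):

```python
import math

def hoeffding_bound(train_err: float, n: int, delta: float = 0.05) -> float:
    """Hoeffding bound for ONE fixed hypothesis and a [0, 1]-bounded loss:
    with probability >= 1 - delta over an i.i.d. sample of size n,
    true risk <= empirical risk + sqrt(ln(2/delta) / (2n))."""
    return train_err + math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# The bound tightens as the sample size n grows.
print(hoeffding_bound(0.10, n=10_000))   # ~0.1136
print(hoeffding_bound(0.10, n=1_000_000))
```

Note this holds only for a hypothesis chosen before seeing the data; bounding a learned hypothesis requires uniform-convergence, stability, PAC-Bayesian, or information-theoretic arguments, which is what the papers listed here study.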
Latest papers with no code
Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization
In this work, we investigate the interplay between memorization and learning in the context of \emph{stochastic convex optimization} (SCO).
Active Few-Shot Fine-Tuning
We study the active few-shot fine-tuning of large neural networks to downstream tasks.
Generalizing across Temporal Domains with Koopman Operators
By employing Koopman operators, we address the time-evolving distributions encountered in temporal domain generalization (TDG): following Koopman theory, measurement functions are sought that establish linear transition relations between evolving domains.
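The core idea of fitting a linear transition operator between successive system states can be sketched with a least-squares fit on snapshot pairs, in the spirit of dynamic mode decomposition. This is only an illustration of the underlying principle, not the paper's algorithm:

```python
import numpy as np

# Ground-truth linear dynamics x_{t+1} = A @ x_t (assumed for the demo).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])

# Generate a short trajectory of snapshots.
x0 = np.array([1.0, 1.0])
x1 = A @ x0
x2 = A @ x1

# Stack snapshot pairs: columns of X map to columns of Y one step later.
X = np.column_stack([x0, x1])
Y = np.column_stack([x1, x2])

# Least-squares estimate of the transition operator K with Y ≈ K @ X.
K = Y @ np.linalg.pinv(X)

# For exactly linear dynamics and full-rank snapshots, K recovers A.
print(np.allclose(K, A))  # True
```

In Koopman theory the same linear-fit idea is applied to nonlinear systems by first lifting states through measurement (observable) functions, so that the dynamics become approximately linear in the lifted space.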
Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation
Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past years.
More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms
We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.
PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network
As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve on the bounds of Liao et al. (2020) by avoiding exponential dependence on the maximum node degree.
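Since two entries above invoke PAC-Bayesian theory, it may help to see the shape of a classical bound of this family. A minimal sketch of the McAllester-style PAC-Bayes bound (the function name and example numbers are illustrative; this is the generic bound, not either paper's refined version):

```python
import math

def mcallester_bound(emp_risk: float, kl: float, n: int,
                     delta: float = 0.05) -> float:
    """McAllester PAC-Bayes bound: with probability >= 1 - delta,
    for all posteriors Q over hypotheses,
      E_Q[true risk] <= E_Q[empirical risk]
                        + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)),
    where P is a data-independent prior and KL(Q||P) measures how far
    the learned posterior moved from it."""
    complexity = (kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    return emp_risk + math.sqrt(complexity)

# A posterior close to the prior (small KL) yields a tighter bound.
print(mcallester_bound(0.05, kl=10.0, n=50_000))
print(mcallester_bound(0.05, kl=100.0, n=50_000))
```

The KL term is where architecture-specific analyses (e.g., for GCNs or meta-learners) do their work: a tighter handle on how the posterior concentrates yields a tighter bound.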
Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
Unveiling the reasons behind the exceptional success of transformers requires a better understanding of why attention layers are suitable for NLP tasks.
Data-Dependent Stability Analysis of Adversarial Training
Stability analysis is an essential tool for studying the generalization ability of deep learning, as it yields generalization bounds for training algorithms based on stochastic gradient descent.
Class-wise Generalization Error: an Information-Theoretic Analysis
Existing generalization theories of supervised learning typically take a holistic approach and provide bounds for the expected generalization over the whole data distribution, which implicitly assumes that the model generalizes similarly for all the classes.
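The class-wise quantity this paper analyzes is straightforward to measure empirically: instead of one aggregate error rate, compute the misclassification rate conditioned on each true class. A minimal sketch (function name is illustrative, not from the paper):

```python
from collections import defaultdict

def classwise_error(y_true, y_pred):
    """Per-class misclassification rate: for each true class c,
    the fraction of examples with label c that were predicted incorrectly.
    A model can have a low overall error while failing badly on one class."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        errors[t] += int(t != p)
    return {c: errors[c] / totals[c] for c in totals}

# Overall error is 25%, but it is concentrated entirely in class 0.
print(classwise_error([0, 0, 1, 1], [0, 1, 1, 1]))  # {0: 0.5, 1: 0.0}
```

A holistic bound controls only the average of these per-class gaps, which is exactly the limitation the information-theoretic class-wise analysis addresses.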
PAC-Bayesian Domain Adaptation Bounds for Multi-view learning
This paper presents a series of new results for domain adaptation in the multi-view learning setting.