Generalization Bounds
131 papers with code • 0 benchmarks • 0 datasets
Latest papers
Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing
We show that implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem, and we give conditions under which they may suffer from over-smoothing during training (OST) and inference (OSI).
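To make the fixed-point view concrete, here is a minimal sketch of an implicit (fixed-point) GNN layer. The linear update rule, the toy graph, and the contraction factor `gamma` are illustrative assumptions, not the paper's architecture or energy functional:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node graph with self-loops; A_hat is the symmetric normalized
# adjacency D^{-1/2} (A + I) D^{-1/2}, as commonly used in GNNs.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_self = A + np.eye(4)
d = A_self.sum(axis=1)
A_hat = A_self / np.sqrt(np.outer(d, d))

X = rng.normal(size=(4, 3))  # node features (illustrative)
gamma = 0.5                  # gamma < 1 keeps the map a contraction

# Implicit layer: node states Z solve the fixed-point equation
# Z = gamma * A_hat @ Z + X, found here by repeated substitution.
Z = np.zeros_like(X)
for _ in range(200):
    Z_new = gamma * A_hat @ Z + X
    if np.max(np.abs(Z_new - Z)) < 1e-10:
        Z = Z_new
        break
    Z = Z_new

# For a linear update the fixed point also has a closed form,
# Z* = (I - gamma * A_hat)^{-1} X, useful as a sanity check.
Z_star = np.linalg.solve(np.eye(4) - gamma * A_hat, X)
```

Because the spectral norm of `gamma * A_hat` is below one, the iteration converges to the unique fixed point regardless of the initial `Z`; over-smoothing concerns arise when the fixed point collapses node representations toward each other.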
Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
In this work, we propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks.
How Does Information Bottleneck Help Deep Learning?
In this paper, we provide the first rigorous learning theory for justifying the benefit of information bottleneck in deep learning by mathematically relating information bottleneck to generalization errors.
Double-Weighting for Covariate Shift Adaptation
Supervised learning is often affected by a covariate shift in which the marginal distributions of instances (covariates $x$) of training and testing samples $\mathrm{p}_\text{tr}(x)$ and $\mathrm{p}_\text{te}(x)$ are different but the label conditionals coincide.
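The covariate-shift setup can be illustrated with a minimal importance-weighting sketch (classical single-weight density-ratio reweighting, not the paper's double-weighting method; the Gaussian marginals and the linear label conditional are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: training and test inputs have different marginals,
# but the label conditional p(y|x) is the same (here y = 2x + noise).
x_tr = rng.normal(0.0, 1.0, size=500)   # p_tr(x) = N(0, 1)
x_te = rng.normal(1.5, 1.0, size=500)   # p_te(x) = N(1.5, 1)
y_tr = 2.0 * x_tr + rng.normal(0.0, 0.1, size=500)

def density_ratio(x, mu_tr=0.0, mu_te=1.5, sigma=1.0):
    """Importance weight w(x) = p_te(x) / p_tr(x), known in closed
    form here because both marginals are Gaussians of equal variance."""
    log_w = ((x - mu_tr) ** 2 - (x - mu_te) ** 2) / (2 * sigma ** 2)
    return np.exp(log_w)

w = density_ratio(x_tr)

# Weighted least squares for y ~ a*x + b: minimizing the w-weighted
# training loss targets the risk under the test distribution.
X = np.stack([x_tr, np.ones_like(x_tr)], axis=1)
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y_tr, rcond=None)
a, b = coef
```

In practice the density ratio is unknown and must be estimated, which is exactly where reweighting methods differ; this sketch only shows the mechanics of correcting the training objective for the shifted marginal.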
Towards Understanding Generalization of Macro-AUC in Multi-label Learning
We theoretically identify a critical factor of the dataset affecting the generalization bounds: \emph{the label-wise class imbalance}.
The Ideal Continual Learner: An Agent That Never Forgets
We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods.
AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs
AdapterGNN preserves the knowledge of the large pre-trained model and leverages highly expressive adapters for GNNs, which adapt effectively to downstream tasks with only a few parameters while also improving the model's generalization ability.
Energy-guided Entropic Neural Optimal Transport
Energy-based models (EBMs) have been known in the machine learning community for decades.
Algorithm-Dependent Bounds for Representation Learning of Multi-Source Domain Adaptation
We further provide algorithm-dependent generalization bounds for these two settings, where the generalization is characterized by the mutual information between the parameters and the data.
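For context, the canonical information-theoretic generalization bound of this form (Xu and Raginsky, 2017) states that, for a $\sigma$-sub-Gaussian loss and a training sample $S$ of size $n$ producing parameters $W$, the expected generalization gap is controlled by the mutual information between $W$ and $S$ (the paper's algorithm-dependent bounds refine this idea for multi-source domain adaptation):

```latex
\mathbb{E}\left[\operatorname{gen}(W, S)\right]
  \le \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}
```

Intuitively, the less the learned parameters reveal about the particular training sample, the smaller the gap between training and test risk.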
Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks
Our analysis indicates that the transformed low-rank parameterization can promisingly enhance robust generalization for t-NNs.