Generalization Bounds

131 papers with code • 0 benchmarks • 0 datasets


Latest papers with no code

Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization

no code yet • 14 Feb 2024

In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO).

Active Few-Shot Fine-Tuning

no code yet • 13 Feb 2024

We study the active few-shot fine-tuning of large neural networks to downstream tasks.

Generalizing across Temporal Domains with Koopman Operators

no code yet • 12 Feb 2024

By employing Koopman operators, we address the time-evolving distributions encountered in temporal domain generalization (TDG): measurement functions are sought that establish linear transition relations between the evolving domains.
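
As an illustrative aside (not the paper's method), the Koopman idea can be sketched in an EDMD-style fit: lift states through measurement functions and estimate a linear operator between consecutive lifted snapshots by least squares. The helpers `lift` and `fit_koopman` below are hypothetical names chosen for this sketch.

```python
# Minimal EDMD-style sketch of the Koopman idea (illustrative only):
# lift states through measurement functions, then fit a linear operator K
# such that lift(x_{t+1}) ~= K @ lift(x_t), via least squares over snapshots.
import numpy as np

def lift(x):
    """Hypothetical measurement (observable) functions: monomials up to degree 2."""
    x = np.atleast_1d(x)
    return np.concatenate(([1.0], x, x**2))

def fit_koopman(snapshots):
    """Fit a finite-dimensional Koopman approximation from a state trajectory."""
    Phi = np.stack([lift(x) for x in snapshots[:-1]], axis=1)  # lifted states at time t
    Psi = np.stack([lift(x) for x in snapshots[1:]], axis=1)   # lifted states at time t+1
    return Psi @ np.linalg.pinv(Phi)                            # least-squares solution of Psi ~= K @ Phi

# Toy usage: a nonlinear scalar system x_{t+1} = 0.9 * x_t - 0.1 * x_t**2
traj = [np.array([0.5])]
for _ in range(50):
    x = traj[-1]
    traj.append(0.9 * x - 0.1 * x**2)
K = fit_koopman(traj)
print(K.shape)  # (3, 3): the dynamics act linearly in the lifted (measurement) space
```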

Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation

no code yet • 12 Feb 2024

Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past years.

More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms

no code yet • 6 Feb 2024

We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.

PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network

no code yet • 6 Feb 2024

As corollaries, we derive tighter PAC-Bayesian robust generalization bounds for GCNs in the standard setting, improving on the bounds of Liao et al. (2020) by avoiding exponential dependence on the maximum node degree.
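
For background only (this is the classical McAllester-style PAC-Bayesian bound in standard notation, not the paper's GCN-specific or adversarially robust result): with a prior P, posterior Q, population risk L, empirical risk L̂_S on an i.i.d. sample of size n, and confidence level δ,

```latex
% Classical McAllester-style PAC-Bayesian bound (background, not the paper's result):
% with probability at least 1 - \delta over the sample S, simultaneously for all posteriors Q,
\mathbb{E}_{h \sim Q}\big[L(h)\big]
  \;\le\;
  \mathbb{E}_{h \sim Q}\big[\widehat{L}_S(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
```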

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features

no code yet • 5 Feb 2024

Unveiling the reasons behind the exceptional success of transformers requires a better understanding of why attention layers are suitable for NLP tasks.

Data-Dependent Stability Analysis of Adversarial Training

no code yet • 6 Jan 2024

Stability analysis is an essential tool for studying the generalization ability of deep learning, as it yields generalization bounds for training algorithms based on stochastic gradient descent.
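
As background on the stability route to generalization (the classical uniform-stability result of Bousquet and Elisseeff, not the paper's data-dependent bounds for adversarial training):

```latex
% Background: uniform stability implies generalization (Bousquet & Elisseeff, 2002).
% If algorithm A is \beta-uniformly stable, i.e. for all samples S, S' differing in one
% example and all test points z:  |\ell(A(S), z) - \ell(A(S'), z)| \le \beta, then
\mathbb{E}_{S}\big[\, R(A(S)) - \widehat{R}_S(A(S)) \,\big] \;\le\; \beta ,
% where R is the population risk and \widehat{R}_S the empirical risk on S.
```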

Class-wise Generalization Error: an Information-Theoretic Analysis

no code yet • 5 Jan 2024

Existing generalization theories of supervised learning typically take a holistic approach, bounding the expected generalization over the whole data distribution and thereby implicitly assuming that the model generalizes similarly across all classes.
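
To make the distinction concrete, one plausible formalization (illustrative notation, not necessarily the paper's definitions) contrasts the standard generalization gap with a per-class gap that conditions on the label:

```latex
% Illustrative formalization (not necessarily the paper's exact notation):
% standard (whole-distribution) generalization gap of hypothesis W on sample S = (Z_1,\dots,Z_n),
\mathrm{gen}(W, S) \;=\; \mathbb{E}_{Z \sim \mu}\big[\ell(W, Z)\big]
  - \frac{1}{n}\sum_{i=1}^{n} \ell(W, Z_i),
% versus a class-wise gap that conditions both terms on a particular label y,
\mathrm{gen}_y(W, S) \;=\; \mathbb{E}_{Z \sim \mu \mid Y = y}\big[\ell(W, Z)\big]
  - \frac{1}{n_y}\sum_{i \,:\, Y_i = y} \ell(W, Z_i).
```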

PAC-Bayesian Domain Adaptation Bounds for Multi-view learning

no code yet • 2 Jan 2024

This paper presents a series of new results for domain adaptation in the multi-view learning setting.