Generalization Bounds
131 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Learning to Warm-Start Fixed-Point Optimization Algorithms
We introduce a machine-learning framework to warm-start fixed-point optimization algorithms.
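The payoff of a warm start can be illustrated with a toy fixed-point iteration: an initial point closer to the fixed point needs fewer iterations to converge. The contraction map `f`, the starting points, and the tolerance below are illustrative assumptions for a minimal sketch, not the paper's learned warm-start framework:

```python
import numpy as np

def fixed_point_iterate(f, x0, tol=1e-8, max_iter=1000):
    """Iterate x_{k+1} = f(x_k) until successive iterates are within tol."""
    x = x0
    for k in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

# Toy contraction: f(x) = 0.5 * x + b has the unique fixed point x* = 2b.
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x + b

# Cold start at zero vs. a (hypothetical) learned warm start near x*.
_, iters_cold = fixed_point_iterate(f, np.zeros(2))
_, iters_warm = fixed_point_iterate(f, 2 * b + 0.01)
assert iters_warm < iters_cold  # the warm start converges in fewer steps
```

The paper's contribution is learning the warm-start point from data; here it is simply hard-coded near the solution to show why fewer iterations result.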
Generalization Bounds for Learning with Linear, Polygonal, Quadratic and Conic Side Knowledge
In this paper, we consider a supervised learning setting where side knowledge is provided about the labels of unlabeled examples.
Towards a Learning Theory of Cause-Effect Inference
We pose causal inference as the problem of learning to classify probability distributions.
Deep Learning and the Information Bottleneck Principle
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle.
Learning Kernels with Random Features
We extend the randomized-feature approach to the task of learning a kernel (via its associated random features).
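The paper learns the kernel through its random features; the underlying random-feature approximation itself (random Fourier features for the Gaussian kernel) can be sketched as follows. The feature count, bandwidth `gamma`, and seed are illustrative choices, and the learning step from the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features, gamma=1.0, rng=rng):
    """Map rows of X to features z(x) with z(x)@z(y) ~= exp(-gamma*||x-y||^2)."""
    d = X.shape[1]
    # For the Gaussian kernel, frequencies are sampled from N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    phase = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + phase)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=20000)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
assert np.max(np.abs(K_approx - K_exact)) < 0.05  # Monte Carlo error shrinks as 1/sqrt(n_features)
```

With enough random features the inner product of the feature maps converges to the kernel value; the paper's extension is to optimize the distribution these features are drawn from.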
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations
The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals.
Model-Powered Conditional Independence Test
We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables.
A PAC-Bayesian Analysis of Randomized Learning with Application to Stochastic Gradient Descent
The resulting PAC-Bayesian bounds inspire an adaptive sampling algorithm for SGD that optimizes the posterior at runtime.
Regularization via Mass Transportation
The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution.
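Empirical risk minimization as described can be sketched with plain gradient descent on the average logistic loss; the synthetic data, step size, and iteration count below are illustrative assumptions, and the paper's mass-transportation regularizer is not included:

```python
import numpy as np

def empirical_risk(w, X, y):
    """Empirical risk: average logistic loss (1/n) * sum log(1 + exp(-y_i * w@x_i))."""
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([1.5, -2.0]))  # labels from a ground-truth linear rule

# Plain gradient descent on the empirical risk.
w = np.zeros(2)
for _ in range(500):
    margins = y * (X @ w)
    # Gradient of the average logistic loss w.r.t. w.
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

assert empirical_risk(w, X, y) < empirical_risk(np.zeros(2), X, y)
```

Minimizing this empirical average is the baseline that regularization methods, including the transportation-based one above, then modify to control generalization error.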
Estimating the Success of Unsupervised Image to Image Translation
In supervised learning, the validation error is an unbiased estimator of the generalization (test) error and complexity-based generalization bounds are abundant; no such bounds exist for learning a mapping in an unsupervised way.