Generalization Bounds

131 papers with code • 0 benchmarks • 0 datasets

Generalization bounds are theoretical guarantees on the gap between a model's empirical (training) risk and its expected (test) risk, typically stated in terms of model complexity, sample size, or properties of the learning algorithm.

Most implemented papers

Learning to Warm-Start Fixed-Point Optimization Algorithms

stellatogrp/l2ws_fixed_point 14 Sep 2023

We introduce a machine-learning framework to warm-start fixed-point optimization algorithms.
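
As a rough illustration of the idea, the sketch below learns a map from problem parameters to an initial iterate on a toy contractive fixed-point problem. The linear least-squares warm-start model, the operator z ↦ Az + b, and all data are illustrative assumptions, not the authors' framework.

```python
# Hedged sketch: learn a warm start for a fixed-point iteration.
import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
A *= 0.9 / np.linalg.norm(A, 2)     # rescale so z -> Az + b is a contraction

def fixed_point_iterate(b, z0, tol=1e-8, max_iters=1000):
    """Run z <- Az + b from z0; return the fixed point and iteration count."""
    z = z0.copy()
    for k in range(max_iters):
        z_next = A @ z + b
        if np.linalg.norm(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    return z, max_iters

# Training set: problem parameters b_i paired with their fixed points z*(b_i).
B_train = rng.standard_normal((200, n))
Z_train = np.stack([fixed_point_iterate(b, np.zeros(n))[0] for b in B_train])

# Learn a linear map b -> warm start (standing in for the paper's network).
W, *_ = np.linalg.lstsq(B_train, Z_train, rcond=None)

b_test = rng.standard_normal(n)
_, cold = fixed_point_iterate(b_test, np.zeros(n))
_, warm = fixed_point_iterate(b_test, b_test @ W)
print(f"cold start: {cold} iterations, learned warm start: {warm} iterations")
```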

Generalization Bounds for Learning with Linear, Polygonal, Quadratic and Conic Side Knowledge

thejat/supervised_learning_with_side_knowledge 30 May 2014

In this paper, we consider a supervised learning setting where side knowledge is provided about the labels of unlabeled examples.
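
A minimal sketch of this setting, assuming a linear model, squared loss, and interval bounds on the unlabeled predictions as the side knowledge (the paper handles more general polygonal, quadratic, and conic sets); the data and bounds here are made up for illustration.

```python
# Hedged sketch: constrained ERM with linear side knowledge, via cvxpy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d = 5
X_lab = rng.standard_normal((50, d))
w_true = rng.standard_normal(d)
y_lab = X_lab @ w_true + 0.1 * rng.standard_normal(50)

# Side knowledge: the labels of these unlabeled examples lie in [lo, hi].
X_unl = rng.standard_normal((20, d))
lo, hi = X_unl @ w_true - 0.5, X_unl @ w_true + 0.5

w = cp.Variable(d)
loss = cp.sum_squares(X_lab @ w - y_lab)           # empirical risk
constraints = [X_unl @ w >= lo, X_unl @ w <= hi]   # linear side knowledge
cp.Problem(cp.Minimize(loss), constraints).solve()
print("constrained ERM solution:", np.round(w.value, 3))
```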

Towards a Learning Theory of Cause-Effect Inference

lopezpaz/causation_learning_theory 9 Feb 2015

We pose causal inference as the problem of learning to classify probability distributions.
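
The sketch below illustrates the distribution-classification recipe on synthetic cause-effect pairs: each sample cloud is mapped to a fixed-length feature vector, and a standard classifier predicts the direction. The moment-based featurization is a simplified stand-in for the kernel mean embeddings the paper uses.

```python
# Hedged sketch: cause-effect inference as classification of distributions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def featurize(x, y):
    """Fixed-length summary of an (x, y) sample cloud."""
    z = np.column_stack([x, y, x * y, x**2, y**2])
    return np.concatenate([z.mean(axis=0), z.std(axis=0)])

def synth_pair(n=300):
    """Generate X -> Y with a random nonlinear mechanism plus noise."""
    x = rng.standard_normal(n)
    y = np.tanh(rng.uniform(0.5, 2.0) * x) + 0.2 * rng.standard_normal(n)
    return x, y

# Label 1 when the first coordinate causes the second, 0 when flipped.
feats, labels = [], []
for _ in range(500):
    x, y = synth_pair()
    if rng.random() < 0.5:
        feats.append(featurize(x, y))
        labels.append(1)
    else:
        feats.append(featurize(y, x))
        labels.append(0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.array(feats[:400]), labels[:400])
print("held-out accuracy:", clf.score(np.array(feats[400:]), labels[400:]))
```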

Deep Learning and the Information Bottleneck Principle

amrutn/Information-in-Language 9 Mar 2015

Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle.
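
One concrete way this analysis is carried out empirically is to estimate the mutual informations I(X;T) and I(T;Y) for a hidden layer T by discretizing its activations. The plug-in binning estimator below is a common choice, shown here on simulated activations rather than a trained network.

```python
# Hedged sketch: binning-based mutual information estimates for an IB analysis.
import numpy as np

def mutual_information(a, b):
    """Plug-in MI estimate (in bits) between two discrete-valued arrays."""
    n = len(a)
    joint = {}
    for pair in zip(a, b):
        joint[pair] = joint.get(pair, 0) + 1
    pa = {k: v / n for k, v in zip(*np.unique(a, return_counts=True))}
    pb = {k: v / n for k, v in zip(*np.unique(b, return_counts=True))}
    return sum((c / n) * np.log2((c / n) / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=5000)            # discrete input "patterns"
y = (x % 2).astype(int)                      # label depends on x
t_cont = np.tanh((x - 3.5) / 2 + 0.3 * rng.standard_normal(5000))
t = np.digitize(t_cont, np.linspace(-1, 1, 30))   # binned hidden layer T

print("I(X;T) =", round(mutual_information(x, t), 3), "bits")
print("I(T;Y) =", round(mutual_information(t, y), 3), "bits")
```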

Learning Kernels with Random Features

Kaslanarian/LKRF NeurIPS 2016

We extend the randomized-feature approach to the task of learning a kernel (via its associated random features).
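
A hedged sketch of the flavor of this approach: draw random Fourier features, score them by alignment with the labels, and fit a linear model on the best-scoring subset. The alignment-and-select step is a simplified stand-in for the paper's optimization over the feature distribution.

```python
# Hedged sketch: kernel learning via selection of random Fourier features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d, D = 400, 10, 500
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((d, D))              # random feature directions
b = rng.uniform(0, 2 * np.pi, D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)   # random Fourier features

# Score each feature by |alignment| with the centered labels; keep the best.
scores = np.abs(Phi.T @ (y - y.mean()))
keep = np.argsort(scores)[-100:]

model = Ridge(alpha=1.0).fit(Phi[:, keep], y)
print("train R^2 with learned feature subset:",
      round(model.score(Phi[:, keep], y), 3))
```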

Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

albietz/ckn_kernel 9 Jun 2017

The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals.

Model-Powered Conditional Independence Test

rajatsen91/CCIT NeurIPS 2017

We consider the problem of nonparametric conditional independence (CI) testing for continuous random variables.
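
The sketch below illustrates the classifier-based reduction behind this line of work: simulate the conditionally independent null by shuffling X within groups of similar Z (a crude stand-in for the paper's nearest-neighbor bootstrap), then test whether a classifier can distinguish real triples from null triples; near-chance accuracy is evidence for conditional independence.

```python
# Hedged sketch: CI testing reduced to classification against a simulated null.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)
x = z + 0.3 * rng.standard_normal(n)
y = z + 0.3 * rng.standard_normal(n)   # X independent of Y given Z holds here

# Null samples: permute x within bins of z, so x relates to y only through z.
bins = np.digitize(z, np.quantile(z, np.linspace(0, 1, 20)[1:-1]))
x_null = x.copy()
for g in np.unique(bins):
    idx = np.where(bins == g)[0]
    x_null[idx] = rng.permutation(x_null[idx])

real = np.column_stack([x, y, z])
null = np.column_stack([x_null, y, z])
data = np.vstack([real, null])
labels = np.r_[np.ones(n), np.zeros(n)]

tr_X, te_X, tr_y, te_y = train_test_split(data, labels, random_state=0)
acc = GradientBoostingClassifier().fit(tr_X, tr_y).score(te_X, te_y)
print("classifier accuracy:", round(acc, 3), "(near 0.5 suggests CI)")
```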

A PAC-Bayesian Analysis of Randomized Learning with Application to Stochastic Gradient Descent

Zymrael/PAC-Adasampling NeurIPS 2017

Our PAC-Bayesian bound for randomized learning algorithms inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime.
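
A hedged sketch of adaptive sampling for SGD, assuming a simple multiplicative-weights update that sharpens the sampling distribution toward high-loss examples, with importance-weighted gradients to keep the steps unbiased; the update rule and all parameters here are illustrative, not the paper's algorithm.

```python
# Hedged sketch: SGD with an adaptively optimized sampling distribution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
p = np.full(n, 1.0 / n)                 # sampling "posterior" over examples
lr, eta = 0.01, 0.05

for step in range(3000):
    i = rng.choice(n, p=p)              # draw an example from the posterior
    err = X[i] @ w - y[i]
    w -= lr * err * X[i] / (n * p[i])   # importance-weighted gradient step
    p[i] *= np.exp(eta * min(err ** 2, 5.0))  # upweight high-loss examples
    p = 0.99 * p / p.sum() + 0.01 / n   # smooth toward uniform for stability

print("final mean squared error:", round(float(np.mean((X @ w - y) ** 2)), 4))
```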

Regularization via Mass Transportation

sorooshafiee/Regularization-via-Transportation 27 Oct 2017

The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution.
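
The paper's central reformulation is that, for suitable losses and transport costs, the worst-case risk over a Wasserstein ball around the empirical distribution collapses to the empirical risk plus a norm penalty. The sketch below shows this for absolute-loss regression, assuming feature-only perturbations under an ℓ2 ground metric, which yields an ℓ2 penalty scaled by the ball radius.

```python
# Hedged sketch: Wasserstein-robust regression as norm-regularized ERM.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 5, 0.1
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

w = cp.Variable(d)
empirical_risk = cp.sum(cp.abs(X @ w - y)) / n
# Dual-norm penalty: the tractable reformulation of the Wasserstein worst case
# under the stated assumptions (eps = radius of the ambiguity ball).
robust_risk = empirical_risk + eps * cp.norm(w, 2)
cp.Problem(cp.Minimize(robust_risk)).solve()
print("robust regression weights:", np.round(w.value, 3))
```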

Estimating the Success of Unsupervised Image to Image Translation

sagiebenaim/gan_bound ECCV 2018

While in supervised learning the validation error is an unbiased estimator of the generalization (test) error and complexity-based generalization bounds are abundant, no such bounds exist for learning a mapping in an unsupervised way.