Search Results for author: Francis R. Bach

Found 19 papers, 0 papers with code

A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

no code implementations NeurIPS 2012 Nicolas L. Roux, Mark Schmidt, Francis R. Bach

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex.

BIG-bench Machine Learning
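The method (later known as SAG, the stochastic average gradient) keeps the most recently computed gradient of each term and steps along their average. A minimal sketch in Python; the per-term gradient oracle `grad_i` and the least-squares usage are illustrative assumptions, not the paper's code:

```python
import numpy as np

def sag(grad_i, n, dim, x0, step=0.05, iters=5000, seed=0):
    """Stochastic average gradient (SAG) sketch for minimizing
    (1/n) * sum_i f_i(x): keep the last gradient seen for each f_i
    and step along the average of the stored gradients."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    memory = np.zeros((n, dim))   # last gradient computed for each f_i
    total = np.zeros(dim)         # running sum of the stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(i, x)
        total += g - memory[i]    # keep the sum consistent with memory
        memory[i] = g
        x -= step * total / n     # step along the average stored gradient
    return x

# Illustrative least-squares use: f_i(x) = 0.5 * (A[i] @ x - b[i])**2
A = np.random.default_rng(1).normal(size=(100, 5))
b = A @ np.ones(5)
x_hat = sag(lambda i, x: (A[i] @ x - b[i]) * A[i], n=100, dim=5, x0=np.zeros(5))
```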

Multiple Operator-valued Kernel Learning

no code implementations NeurIPS 2012 Hachem Kadri, Alain Rakotomamonjy, Philippe Preux, Francis R. Bach

We study this problem in the case of kernel ridge regression for functional responses with an ℓ_r-norm constraint on the combination coefficients.

regression
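A scalar-output simplification of the kernel-combination idea, assuming precomputed Gram matrices and fixed weights (the operator-valued, functional-response machinery of the paper is not reproduced here):

```python
import numpy as np

def combined_kernel_ridge(K_list, weights, y, lam):
    """Kernel ridge regression with a fixed weighted combination of
    precomputed Gram matrices; learning the weights under an
    l_r-norm constraint is what the paper addresses."""
    K = sum(w * Kk for w, Kk in zip(weights, K_list))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha  # predictions on the training points are K @ alpha
```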

Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning

no code implementations NeurIPS 2011 Eric Moulines, Francis R. Bach

We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients.

BIG-bench Machine Learning · regression
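One of the schemes analysed is stochastic gradient descent with Polyak-Ruppert averaging; a minimal sketch, assuming an unbiased gradient oracle `grad_est` (the name and the step-size constants are illustrative):

```python
import numpy as np

def averaged_sgd(grad_est, x0, iters=10000, c=0.1, alpha=0.5):
    """SGD with step sizes c / t**alpha and Polyak-Ruppert averaging
    of the iterates, the kind of scheme whose non-asymptotic
    behaviour the paper quantifies."""
    x = x0.copy()
    x_bar = x0.copy()
    for t in range(1, iters + 1):
        x = x - (c / t**alpha) * grad_est(x)
        x_bar += (x - x_bar) / t   # running average of the iterates
    return x_bar
```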

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

no code implementations NeurIPS 2011 Mark Schmidt, Nicolas L. Roux, Francis R. Bach

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the second term.
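The iteration being perturbed is the standard proximal-gradient step; a minimal sketch with the ℓ1 proximity operator as example (the paper's point is to bound the effect of errors in `grad_f` or `prox_g`):

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """Proximal-gradient iteration for min_x f(x) + g(x), f smooth
    and g non-smooth; the paper bounds the convergence rate when
    grad_f or prox_g is only computed approximately."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

def soft_threshold(z, step, lam=0.1):
    """Proximity operator of g(x) = lam * ||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```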

Trace Lasso: a trace norm regularization for correlated designs

no code implementations NeurIPS 2011 Edouard Grave, Guillaume R. Obozinski, Francis R. Bach

This norm, called the trace Lasso, uses the trace norm of the selected covariates, which is a convex surrogate of their rank, as the criterion of model complexity.
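The penalty itself is simple to state: the nuclear norm of the design matrix with its columns rescaled by the coefficients. A short sketch (the function name is assumed):

```python
import numpy as np

def trace_lasso_penalty(X, w):
    """Trace Lasso penalty ||X Diag(w)||_* : the nuclear norm of the
    design matrix with column j scaled by w[j]. It equals the l1 norm
    when the columns of X are orthonormal and the l2 norm when they
    are all identical, adapting to correlation in between."""
    return np.linalg.norm(X * w, ord='nuc')
```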

Structured sparsity-inducing norms through submodular functions

no code implementations NeurIPS 2010 Francis R. Bach

Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports.
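The norms in question are Lovász extensions of nondecreasing submodular set functions evaluated at |w|; a sketch of the Lovász extension by the usual greedy/sorting formula (the set-function interface is an assumption of this sketch):

```python
import numpy as np

def lovasz_extension(F, w):
    """Lovász extension of a set function F at |w|, computed by the
    greedy formula: sort |w| in decreasing order and weight each
    marginal gain of F by the corresponding |w_j|. With F(A) = |A|
    this recovers the l1 norm; submodular F yields structured norms."""
    s = np.abs(np.asarray(w, dtype=float))
    order = np.argsort(-s)                 # indices of |w| in decreasing order
    val, prev, selected = 0.0, 0.0, set()
    for j in order:
        selected.add(int(j))
        cur = F(frozenset(selected))
        val += s[j] * (cur - prev)         # marginal gain weighted by |w_j|
        prev = cur
    return val
```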

Network Flow Algorithms for Structured Sparsity

no code implementations NeurIPS 2010 Julien Mairal, Rodolphe Jenatton, Francis R. Bach, Guillaume R. Obozinski

Our algorithm scales up to millions of groups and variables, and opens up a whole new range of applications for structured sparse models.

Efficient Optimization for Discriminative Latent Class Models

no code implementations NeurIPS 2010 Armand Joulin, Jean Ponce, Francis R. Bach

To avoid this problem, we introduce a local approximation of this cost function, which leads to a quadratic non-convex optimization problem over a product of simplices.

Clustering · Document Classification +2
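Optimizing over a product of simplices typically relies on projections onto the probability simplex; a standard sorting-based projection routine (a generic building block, not the paper's specific algorithm):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                   # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```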

Asymptotically Optimal Regularization in Smooth Parametric Models

no code implementations NeurIPS 2009 Percy S. Liang, Guillaume Bouchard, Francis R. Bach, Michael I. Jordan

Many types of regularization schemes have been employed in statistical learning, each one motivated by some assumption about the problem domain.

Multi-Task Learning

Data-driven calibration of linear estimators with minimal penalties

no code implementations NeurIPS 2009 Sylvain Arlot, Francis R. Bach

This paper tackles the problem of selecting among several linear estimators in non-parametric regression; this includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning.

Model Selection · regression
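With the noise variance known, selecting among linear estimators ŷ = A y can be done with a Mallows-Cp-type penalty; the paper's contribution is a data-driven estimate of that variance via minimal penalties, which this sketch simply takes as given:

```python
import numpy as np

def select_linear_estimator(A_list, y, sigma2):
    """Cp-style selection among linear estimators y_hat = A @ y:
    empirical risk plus 2 * sigma2 * tr(A) / n. The paper replaces
    the unknown sigma2 by a minimal-penalty estimate."""
    n = len(y)
    scores = [np.mean((y - A @ y) ** 2) + 2.0 * sigma2 * np.trace(A) / n
              for A in A_list]
    return int(np.argmin(scores))
```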

Kernel Change-point Analysis

no code implementations NeurIPS 2008 Zaïd Harchaoui, Eric Moulines, Francis R. Bach

Change-point analysis of an (unlabelled) sample of observations consists in, first, testing whether a change in distribution occurs within the sample and, second, if a change occurs, estimating the instant after which the observations switch from one distribution to another.

Two-sample testing
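A rough sketch of the scanning step, using a simple MMD-style kernel discrepancy between the two segments in place of the paper's regularised kernel Fisher discriminant statistic (so both the weighting and the statistic are simplifications):

```python
import numpy as np

def scan_change_point(K):
    """Scan candidate change instants t and return the one maximising
    a kernel discrepancy between observations before and after t.
    K is the (n, n) Gram matrix of the sample in time order."""
    n = K.shape[0]
    best_t, best_stat = None, -np.inf
    for t in range(2, n - 1):
        a, b = slice(0, t), slice(t, n)
        gap = K[a, a].mean() + K[b, b].mean() - 2.0 * K[a, b].mean()
        stat = (t * (n - t) / n) * gap   # reweight for segment sizes
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat
```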

Clustered Multi-Task Learning: A Convex Formulation

no code implementations NeurIPS 2008 Laurent Jacob, Jean-Philippe Vert, Francis R. Bach

In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others.

Multi-Task Learning

Supervised Dictionary Learning

no code implementations NeurIPS 2008 Julien Mairal, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, Francis R. Bach

It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data.

Dictionary Learning · General Classification +1

Sparse probabilistic projections

no code implementations NeurIPS 2008 Cédric Archambeau, Francis R. Bach

We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases.

Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

no code implementations NeurIPS 2008 Francis R. Bach

For supervised and unsupervised learning, positive definite kernels make it possible to use large and potentially infinite-dimensional feature spaces at a computational cost that depends only on the number of observations.

Variable Selection
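The property referred to is the kernel trick: all computations go through the n × n Gram matrix, so the cost scales with the number of observations rather than the feature-space dimension. A minimal illustration with the Gaussian kernel:

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    """(n, n) Gram matrix of the Gaussian kernel: the only object a
    kernel method needs, even though the corresponding feature space
    is infinite-dimensional."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))
```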

DIFFRAC: a discriminative and flexible framework for clustering

no code implementations NeurIPS 2007 Francis R. Bach, Zaïd Harchaoui

We present a novel linear clustering framework (Diffrac) which relies on a linear discriminative cost function and a convex relaxation of a combinatorial optimization problem.

Clustering · Combinatorial Optimization +1

Testing for Homogeneity with Kernel Fisher Discriminant Analysis

no code implementations NeurIPS 2007 Eric Moulines, Francis R. Bach, Zaïd Harchaoui

This provides us with a consistent nonparametric test statistic, for which we derive the asymptotic distribution under the null hypothesis.
