NeurIPS 2009 • Pascal Germain, Alexandre Lacasse, Mario Marchand, Sara Shanian, François Laviolette
We show that standard ℓ_p-regularized objective functions currently in use, such as ridge regression and ℓ_p-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi-uniform posterior and the uniform prior.
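As a quick illustration of the general connection between KL regularizers and ℓ_p penalties, the following sketch verifies a standard fact (not the paper's quasi-uniform construction, which is specific to its posterior/prior pair): for isotropic Gaussian posterior N(w, σ²I) and prior N(0, σ²I), the KL divergence equals the ridge penalty ‖w‖²/(2σ²).

```python
import numpy as np

def kl_isotropic_gaussians(mu_q, mu_p, sigma=1.0):
    """KL( N(mu_q, sigma^2 I) || N(mu_p, sigma^2 I) ).

    For equal-covariance isotropic Gaussians the closed form is
    ||mu_q - mu_p||^2 / (2 sigma^2), i.e. a scaled ridge penalty.
    """
    diff = np.asarray(mu_q, dtype=float) - np.asarray(mu_p, dtype=float)
    return float(diff @ diff) / (2.0 * sigma ** 2)

# A hypothetical weight vector w, with a standard-normal prior centered at 0.
w = np.array([0.5, -1.0, 2.0])
kl = kl_isotropic_gaussians(w, np.zeros_like(w))
ridge_penalty = 0.5 * float(w @ w)
print(kl, ridge_penalty)  # the two values coincide: ||w||^2 / 2
```

This Gaussian case is the simplest setting in which a KL-divergence regularizer collapses to an ℓ_2 term; the abstract's claim generalizes this kind of correspondence to the ℓ_p objectives used in ridge regression and boosting.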