Search Results for author: Paul Rolland

Found 12 papers, 3 papers with code

Double-Loop Unadjusted Langevin Algorithm

no code implementations • ICML 2020 • Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher

A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA).
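
For reference, the basic ULA update for a target π(x) ∝ exp(−f(x)) is x_{k+1} = x_k − γ∇f(x_k) + √(2γ) ξ_k with ξ_k ~ N(0, I). The sketch below runs plain ULA (not the paper's double-loop variant) on a toy 2-D Gaussian potential; the step size and iteration counts are arbitrary illustrative choices.

    import numpy as np

    # Plain ULA for pi(x) ∝ exp(-f(x)) with f(x) = 0.5 * x^T A x (log-concave).
    rng = np.random.default_rng(0)
    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])            # precision matrix of the target Gaussian

    def grad_f(x):
        return A @ x                      # gradient of the potential

    gamma = 0.05                          # step size
    x = np.zeros(2)
    samples = []
    for k in range(20_000):
        xi = rng.standard_normal(2)
        x = x - gamma * grad_f(x) + np.sqrt(2.0 * gamma) * xi   # ULA step
        if k > 5_000:                     # discard burn-in
            samples.append(x.copy())

    samples = np.array(samples)
    print("empirical covariance:\n", np.cov(samples.T))
    print("target covariance A^{-1}:\n", np.linalg.inv(A))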

Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning

1 code implementation • 22 Sep 2022 • Paul Rolland, Luca Viano, Norman Schuerhoff, Boris Nikolov, Volkan Cevher

While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observing an expert's behavior.

Reinforcement Learning (RL)

Score matching enables causal discovery of nonlinear additive noise models

no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello

This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.

Causal Discovery
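
For orientation, the criterion that drives this approach (stated loosely here; the precise lemma and assumptions are in the paper) is that, writing s(x) = ∇_x log p(x) for the score, a variable in a nonlinear additive Gaussian noise model is a leaf of the causal graph exactly when the corresponding diagonal entry of the score's Jacobian is constant over the data, so leaves can be detected and peeled off iteratively to obtain a topological order:

    \[
    x_j \text{ is a leaf} \;\Longleftrightarrow\; \operatorname{Var}_{x \sim p}\!\left[\frac{\partial s_j(x)}{\partial x_j}\right] = 0,
    \qquad s(x) := \nabla_x \log p(x).
    \]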

The Effect of the Intrinsic Dimension on the Generalization of Quadratic Classifiers

no code implementations • NeurIPS 2021 • Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher

We demonstrate this by deriving an upper bound on the Rademacher Complexity that depends on two key quantities: (i) the intrinsic dimension, which is a measure of isotropy, and (ii) the largest eigenvalue of the second moment (covariance) matrix of the distribution.
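
As a point of reference, a common definition of the intrinsic (effective) dimension of a second-moment matrix Σ is tr(Σ)/λ_max(Σ), which equals the ambient dimension exactly when Σ is isotropic; the snippet below assumes that definition, since the listing does not spell out the paper's precise one.

    import numpy as np

    def intrinsic_dimension(Sigma):
        # tr(Sigma) / lambda_max(Sigma); equals d iff Sigma is a multiple of the identity
        eigvals = np.linalg.eigvalsh(Sigma)
        return eigvals.sum() / eigvals.max()

    print(intrinsic_dimension(np.eye(5)))                          # 5.0  (isotropic)
    print(intrinsic_dimension(np.diag([10., 1., 1., 1., 1.])))     # 1.4  (far from isotropic)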

Linear Convergence of SGD on Overparametrized Shallow Neural Networks

no code implementations • 29 Sep 2021 • Paul Rolland, Ali Ramezani-Kebrya, ChaeHwan Song, Fabian Latorre, Volkan Cevher

Despite the non-convex landscape, first-order methods can be shown to reach global minima when training overparameterized neural networks, where the number of parameters far exceeds the number of training samples.
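
As a toy illustration of that regime (not the paper's analysis), the sketch below trains only the hidden layer of a wide one-hidden-layer ReLU network on ten points with plain SGD and prints the training loss, which typically decays rapidly toward zero; the width, step size, and data are made-up choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 10, 3, 2000                 # 10 samples, width 2000 -> heavily overparameterized
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    W = rng.standard_normal((m, d))       # trained hidden-layer weights
    a = rng.choice([-1.0, 1.0], size=m)   # fixed output-layer signs

    def predict(x):
        return a @ np.maximum(W @ x, 0.0) / np.sqrt(m)

    lr = 0.2
    for epoch in range(201):
        for i in rng.permutation(n):      # one SGD pass over the data
            h = np.maximum(W @ X[i], 0.0)
            err = a @ h / np.sqrt(m) - y[i]
            W -= lr * err * np.outer(a * (h > 0), X[i]) / np.sqrt(m)
        if epoch % 50 == 0:
            loss = 0.5 * np.mean([(predict(x) - t) ** 2 for x, t in zip(X, y)])
            print(f"epoch {epoch:3d}   train loss {loss:.2e}")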

Efficient Proximal Mapping of the 1-path-norm of Shallow Networks

no code implementations • 2 Jul 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher

We demonstrate two new important properties of the 1-path-norm of shallow neural networks.
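
For context, the 1-path-norm of a one-hidden-layer network x ↦ vᵀσ(Wx) (biases omitted here for simplicity) sums |v_j|·|W_{j,i}| over all input-to-output paths; the snippet below only fixes that definition numerically and is not an implementation of the paper's proximal mapping.

    import numpy as np

    def one_path_norm(W, v):
        # sum_{i,j} |v[j]| * |W[j, i]| for the shallow network x -> v^T sigma(W x)
        return float(np.abs(v) @ np.abs(W).sum(axis=1))

    W = np.array([[1.0, -2.0],
                  [0.5,  0.0]])   # hidden-layer weights, shape (hidden, input)
    v = np.array([3.0, -1.0])     # output-layer weights
    print(one_path_norm(W, v))    # 3*(1+2) + 1*(0.5+0) = 9.5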

Lipschitz constant estimation of Neural Networks via sparse polynomial optimization

no code implementations • ICLR 2020 • Fabian Latorre, Paul Rolland, Volkan Cevher

We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.
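
LiPopt's LP/SDP hierarchy is not reproduced here; for a sense of the baseline it is meant to tighten, the snippet below computes the standard layer-wise operator-norm product, which is a valid but typically loose upper bound on the ℓ2 Lipschitz constant of a one-hidden-layer ReLU network (the weights are made up).

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((64, 10))    # hidden layer
    W2 = rng.standard_normal((1, 64))     # output layer

    # ReLU is 1-Lipschitz, so for x -> W2 @ relu(W1 @ x) we have L <= ||W2||_2 * ||W1||_2.
    naive_bound = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)
    print(f"norm-product upper bound on the Lipschitz constant: {naive_bound:.3f}")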

Efficient learning of smooth probability functions from Bernoulli tests with guarantees

no code implementations • 11 Dec 2018 • Paul Rolland, Ali Kavis, Alex Immer, Adish Singla, Volkan Cevher

We study the fundamental problem of learning an unknown, smooth probability function via pointwise Bernoulli tests.

Mirrored Langevin Dynamics

no code implementations • NeurIPS 2018 • Ya-Ping Hsieh, Ali Kavis, Paul Rolland, Volkan Cevher

We consider the problem of sampling from constrained distributions, which has posed significant challenges to both non-asymptotic analysis and algorithmic design.

High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups

1 code implementation • 20 Feb 2018 • Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher

In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables.

Additive models • Bayesian Optimization +2
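
For intuition about the model class (a sketch of the additive-kernel idea only, not of the paper's acquisition-function optimizer): if f decomposes over variable groups, a Gaussian-process prior whose kernel is a sum of low-dimensional kernels on those groups, which are allowed to overlap here, encodes the same structure. The groups and data below are made up.

    import numpy as np

    def rbf(a, b, lengthscale=1.0):
        # squared-exponential kernel on a low-dimensional slice of the input
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * lengthscale ** 2))

    groups = [(0, 1), (1, 2, 3), (3, 4)]   # overlapping groups of a 5-D input

    def additive_kernel(x, y):
        # kernel for f(x) = sum_g f_g(x[g]): a sum of kernels over the groups
        return sum(rbf(x[list(g)], y[list(g)]) for g in groups)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 5))
    K = np.array([[additive_kernel(xi, xj) for xj in X] for xi in X])
    print(np.round(K, 3))                  # Gram matrix under the additive kernel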
