1 code implementation • 1 Nov 2023 • Aaron David Schneider, Paul Mollière, Gilles Louppe, Ludmila Carone, Uffe Gråe Jørgensen, Leen Decin, Christiane Helling
For this study, we specifically examined the coupling between chemistry and radiation in GCMs and compared different methods for mixing the opacities of different chemical species under the correlated-k assumption, when equilibrium chemistry cannot be assumed.
1 code implementation • NeurIPS 2023 • Maciej Falkiewicz, Naoya Takeishi, Imahn Shekhzadeh, Antoine Wehenkel, Arnaud Delaunoy, Gilles Louppe, Alexandros Kalousis
Bayesian inference allows expressing the uncertainty of posterior belief under a probabilistic model given prior information and the likelihood of the evidence.
1 code implementation • 4 Oct 2023 • Victor Mangeleer, Gilles Louppe
In climate simulations, small-scale processes shape ocean dynamics but remain computationally expensive to resolve directly.
1 code implementation • 3 Oct 2023 • François Rozet, Gilles Louppe
Data assimilation addresses the problem of identifying plausible state trajectories of dynamical systems given noisy or incomplete observations.
1 code implementation • 13 Sep 2023 • Sacha Lewin, Maxime Vandegar, Thomas Hoyoux, Olivier Barnich, Gilles Louppe
The long-standing problem of novel view synthesis has many applications, notably in sports broadcasting.
2 code implementations • NeurIPS 2023 • François Rozet, Gilles Louppe
Data assimilation, in its most comprehensive form, addresses the Bayesian inverse problem of identifying plausible state trajectories that explain noisy or incomplete observations of stochastic dynamical systems.
no code implementations • 11 May 2023 • Adrien Bolland, Gilles Louppe, Damien Ernst
First, we formulate direct policy optimization in the optimization by continuation framework.
1 code implementation • 21 Apr 2023 • Arnaud Delaunoy, Benjamin Kurt Miller, Patrick Forré, Christoph Weniger, Gilles Louppe
We show empirically that the balanced versions tend to produce conservative posterior approximations on a wide variety of benchmarks.
no code implementations • 18 Apr 2023 • Norman Marlier, Julien Gustin, Olivier Brüls, Gilles Louppe
Robotic grasping in highly noisy environments presents complex challenges, especially with limited prior knowledge about the scene.
no code implementations • 5 Apr 2023 • Namid R. Stillman, Silke Henkes, Roberto Mayor, Gilles Louppe
Moreover, we demonstrate that a small number (one to three) of snapshots of the system can be used for parameter inference, and that this graph-informed approach outperforms typical metrics such as the average velocity or mean square displacement of the system.
no code implementations • 10 Mar 2023 • Norman Marlier, Olivier Brüls, Gilles Louppe
General robotic grippers are challenging to control because of their rich nonsmooth contact dynamics and the many sources of uncertainties due to the environment or sensor noise.
1 code implementation • 7 Dec 2022 • Renaud Vandeghen, Gilles Louppe, Marc Van Droogenbroeck
In this work, we introduce Adaptive Self-Training for Object Detection (ASTOD), a simple yet effective teacher-student method.
1 code implementation • 29 Aug 2022 • Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, Gilles Louppe
In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution.
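The balance condition behind BNRE can be sketched in a few lines: the usual NRE binary cross-entropy is augmented with a penalty pushing the classifier's average output on joint samples plus its average output on marginal samples towards one. A minimal NumPy illustration (the function name, regularisation weight, and array inputs are illustrative, not taken from the paper):

```python
import numpy as np

def bnre_loss(d_joint, d_marginal, lam=100.0):
    """NRE binary cross-entropy plus the BNRE balance penalty.
    `d_joint` holds classifier outputs d(theta, x) on pairs drawn from
    the joint p(theta, x); `d_marginal` on pairs from the product of
    marginals p(theta)p(x). Names and lambda are illustrative."""
    eps = 1e-12
    bce = -(np.mean(np.log(d_joint + eps)) +
            np.mean(np.log(1.0 - d_marginal + eps)))
    # Balance condition: E_joint[d] + E_marginal[d] should equal 1.
    balance = (np.mean(d_joint) + np.mean(d_marginal) - 1.0) ** 2
    return bce + lam * balance
```

In the balanced regime the penalty vanishes and the loss reduces to standard NRE; the weight trades off exactness against conservativeness.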
1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen
Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.
1 code implementation • 30 Dec 2021 • Arnaud Delaunoy, Gilles Louppe
Anchored ensembles approximate the posterior by training an ensemble of neural networks on anchored losses designed for the optima to follow the Bayesian posterior.
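For a linear model the anchored loss has a closed-form minimiser, which makes the idea easy to sketch: each ensemble member is regularised towards its own anchor drawn from the prior, and the spread of the fitted members approximates posterior uncertainty. A toy NumPy sketch (data, prior scale, and helper name are illustrative; the paper applies the idea to neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from y = 2x + noise.
x = rng.normal(size=(50, 1))
y = 2.0 * x[:, 0] + 0.1 * rng.normal(size=50)

def fit_anchored(x, y, w_anchor, sigma_prior=1.0, noise=0.1):
    """Minimise ||y - x w||^2 / noise^2 + ||w - w_anchor||^2 / sigma_prior^2.
    Closed form for a linear model; each member is pulled towards its
    own anchor rather than towards zero."""
    lam = noise ** 2 / sigma_prior ** 2
    A = x.T @ x + lam * np.eye(x.shape[1])
    b = x.T @ y + lam * w_anchor
    return np.linalg.solve(A, b)

# Each member anchors at a fresh draw from the N(0, 1) prior; the
# spread of the fitted weights approximates posterior uncertainty.
ensemble = [fit_anchored(x, y, rng.normal(size=1)) for _ in range(20)]
posterior_mean = float(np.mean(ensemble))
```

With this much data the members concentrate near the true slope, so the ensemble mean lands close to 2.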
1 code implementation • NeurIPS 2021 • Antonio Sutera, Gilles Louppe, Van Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts
Random forests have been widely used for their ability to provide so-called importance measures, which give insight at a global (per dataset) level on the relevance of input variables to predict a certain output.
4 code implementations • 13 Oct 2021 • Joeri Hermans, Arnaud Delaunoy, François Rozet, Antoine Wehenkel, Volodimir Begy, Gilles Louppe
We present extensive empirical evidence showing that current Bayesian simulation-based inference algorithms can produce computationally unfaithful posterior approximations.
1 code implementation • 1 Oct 2021 • François Rozet, Gilles Louppe
In many areas of science, complex phenomena are modeled by stochastic parametric simulators, often featuring high-dimensional parameter spaces and intractable likelihoods.
no code implementations • 29 Sep 2021 • Norman Marlier, Olivier Brüls, Gilles Louppe
Multi-fingered robotic grasping is an undeniable stepping stone to universal picking and dexterous manipulation.
2 code implementations • NeurIPS 2021 • Benjamin Kurt Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger
Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood.
no code implementations • ICML Workshop INNF 2021 • Antoine Wehenkel, Gilles Louppe
Among likelihood-based approaches for deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling.
1 code implementation • 6 Jun 2021 • Thibaut Théate, Antoine Wehenkel, Adrien Bolland, Gilles Louppe, Damien Ernst
The results highlight the main strengths and weaknesses associated with each probability metric together with an important limitation of the Wasserstein distance.
1 code implementation • NeurIPS 2021 • Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort
Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method.
1 code implementation • 22 Dec 2020 • Pascal Leroy, Damien Ernst, Pierre Geurts, Gilles Louppe, Jonathan Pisane, Matthia Sabatelli
This paper introduces four new algorithms that can be used for tackling multi-agent reinforcement learning (MARL) problems occurring in cooperative settings.
1 code implementation • 30 Nov 2020 • Joeri Hermans, Nilanjan Banik, Christoph Weniger, Gianfranco Bertone, Gilles Louppe
A statistical analysis of the observed perturbations in the density of stellar streams can in principle set stringent constraints on the mass function of dark matter subhaloes, which in turn can be used to constrain the mass of the dark matter particle.
1 code implementation • 27 Nov 2020 • Benjamin Kurt Miller, Alex Cole, Gilles Louppe, Christoph Weniger
We present algorithms (a) for nested neural likelihood-to-evidence ratio estimation, and (b) for simulation reuse via an inhomogeneous Poisson point process cache of parameters and corresponding simulations.
1 code implementation • 11 Nov 2020 • Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe
We revisit empirical Bayes in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations.
no code implementations • 24 Oct 2020 • Arnaud Delaunoy, Antoine Wehenkel, Tanja Hinderer, Samaya Nissanke, Christoph Weniger, Andrew R. Williamson, Gilles Louppe
Gravitational waves from compact binaries measured by the LIGO and Virgo detectors are routinely analyzed using Markov Chain Monte Carlo sampling algorithms.
3 code implementations • 3 Jun 2020 • Antoine Wehenkel, Gilles Louppe
From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure.
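The core idea, an invertible transformation whose conditioner is masked by a graph, can be sketched with a prescribed three-variable DAG and an affine conditioner (adjacency, weights, and function names are illustrative; the paper uses learned neural conditioners and can also learn the structure):

```python
import numpy as np

# Prescribed DAG over 3 variables: x0 -> x1, x0 -> x2, x1 -> x2.
# A[i, j] = 1 means x_j is a parent of x_i. Weights are illustrative.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])
W = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.2, -0.3, 0.0]])

def forward(x):
    """Additive conditioner masked by the adjacency: z_i depends only
    on x_i and its parents, so the Jacobian is triangular and the
    transformation is invertible."""
    return x + (A * W) @ x

def inverse(z):
    # Solve variable-by-variable in topological order.
    x = np.zeros_like(z)
    for i in range(len(z)):
        x[i] = z[i] - (A[i] * W[i]) @ x
    return x
```

Because the masked Jacobian is triangular with unit diagonal, the inverse is recovered exactly by sweeping the variables in topological order.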
no code implementations • 1 Jun 2020 • Antoine Wehenkel, Gilles Louppe
Normalizing flows have emerged as an important family of deep neural networks for modelling complex probability distributions.
no code implementations • 4 Nov 2019 • Kyle Cranmer, Johann Brehmer, Gilles Louppe
Many domains of science have developed complex simulations to describe phenomena of interest.
3 code implementations • 4 Sep 2019 • Johann Brehmer, Siddharth Mishra-Sharma, Joeri Hermans, Gilles Louppe, Kyle Cranmer
The subtle and unique imprint of dark matter substructure on extended arcs in strong lensing systems contains a wealth of information about the properties and distribution of dark matter on small scales and, consequently, about the underlying particle physics.
3 code implementations • 1 Sep 2019 • Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering
This paper makes one step forward towards characterizing a new family of \textit{model-free} Deep Reinforcement Learning (DRL) algorithms.
2 code implementations • NeurIPS 2019 • Antoine Wehenkel, Gilles Louppe
Monotonic neural networks have recently been proposed as a way to define invertible transformations.
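One way to build such a monotonic transformation, in the spirit of unconstrained monotonic neural networks, is to integrate a strictly positive function numerically; monotonicity (and hence invertibility) follows from the positivity of the integrand. A minimal NumPy sketch (the integrand is a stand-in for a network; weights and step count are illustrative):

```python
import numpy as np

def positive_integrand(t, w=1.5, b=0.5):
    """Stand-in for an unconstrained network made strictly positive
    via a softplus output (weights are illustrative)."""
    return np.log1p(np.exp(w * np.sin(t) + b))

def monotone_transform(x, n=64):
    """f(x) = integral_0^x g(t) dt with g > 0, approximated by the
    trapezoidal rule; f is strictly increasing because the integrand
    is strictly positive everywhere."""
    t = np.linspace(0.0, x, n)
    g = positive_integrand(t)
    dt = t[1] - t[0]
    return dt * (g.sum() - 0.5 * (g[0] + g[-1]))
```

Evaluating the transform on an increasing grid of inputs yields strictly increasing outputs, which is the property the flow relies on.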
3 code implementations • 8 Jul 2019 • Atılım Güneş Baydin, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, Saeid Naderiparizi, Bradley Gram-Hansen, Gilles Louppe, Mingfei Ma, Xiaohui Zhao, Philip Torr, Victor Lee, Kyle Cranmer, Prabhat, Frank Wood
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models.
no code implementations • 4 Jun 2019 • Johann Brehmer, Kyle Cranmer, Irina Espejo, Felix Kling, Gilles Louppe, Juan Pavez
One major challenge for the legacy measurements at the LHC is that the likelihood function is not tractable when the collected data is high-dimensional and the detector response has to be modeled.
5 code implementations • ICML 2020 • Joeri Hermans, Volodimir Begy, Gilles Louppe
This work introduces a novel approach to address the intractability of the likelihood and the marginal model.
1 code implementation • 30 Nov 2018 • Arthur Pesah, Antoine Wehenkel, Gilles Louppe
Likelihood-free inference is concerned with the estimation of the parameters of a non-differentiable stochastic simulator that best reproduce real observations.
3 code implementations • 30 Sep 2018 • Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering
We introduce a novel Deep Reinforcement Learning (DRL) algorithm called Deep Quality-Value (DQV) Learning.
no code implementations • 2 Aug 2018 • Markus Stoye, Johann Brehmer, Gilles Louppe, Juan Pavez, Kyle Cranmer
We extend recent work (Brehmer et al.)
3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood
We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.
no code implementations • 8 Jul 2018 • Kim Albertsson, Piero Altoe, Dustin Anderson, John Anderson, Michael Andrews, Juan Pedro Araque Espinosa, Adam Aurisano, Laurent Basara, Adrian Bevan, Wahid Bhimji, Daniele Bonacorsi, Bjorn Burkle, Paolo Calafiura, Mario Campanelli, Louis Capps, Federico Carminati, Stefano Carrazza, Yi-fan Chen, Taylor Childers, Yann Coadou, Elias Coniavitis, Kyle Cranmer, Claire David, Douglas Davis, Andrea De Simone, Javier Duarte, Martin Erdmann, Jonas Eschle, Amir Farbin, Matthew Feickert, Nuno Filipe Castro, Conor Fitzpatrick, Michele Floris, Alessandra Forti, Jordi Garra-Tico, Jochen Gemmler, Maria Girone, Paul Glaysher, Sergei Gleyzer, Vladimir Gligorov, Tobias Golling, Jonas Graw, Lindsey Gray, Dick Greenwood, Thomas Hacker, John Harvey, Benedikt Hegner, Lukas Heinrich, Ulrich Heintz, Ben Hooberman, Johannes Junggeburth, Michael Kagan, Meghan Kane, Konstantin Kanishchev, Przemysław Karpiński, Zahari Kassabov, Gaurav Kaul, Dorian Kcira, Thomas Keck, Alexei Klimentov, Jim Kowalkowski, Luke Kreczko, Alexander Kurepin, Rob Kutschke, Valentin Kuznetsov, Nicolas Köhler, Igor Lakomov, Kevin Lannon, Mario Lassnig, Antonio Limosani, Gilles Louppe, Aashrita Mangu, Pere Mato, Narain Meenakshi, Helge Meinhard, Dario Menasce, Lorenzo Moneta, Seth Moortgat, Mark Neubauer, Harvey Newman, Sydney Otten, Hans Pabst, Michela Paganini, Manfred Paulini, Gabriel Perdue, Uzziel Perez, Attilio Picazio, Jim Pivarski, Harrison Prosper, Fernanda Psihas, Alexander Radovic, Ryan Reece, Aurelius Rinkevicius, Eduardo Rodrigues, Jamal Rorie, David Rousseau, Aaron Sauers, Steven Schramm, Ariel Schwartzman, Horst Severini, Paul Seyfert, Filip Siroky, Konstantin Skazytkin, Mike Sokoloff, Graeme Stewart, Bob Stienen, Ian Stockdale, Giles Strong, Wei Sun, Savannah Thais, Karen Tomko, Eli Upfal, Emanuele Usai, Andrey Ustyuzhanin, Martin Vala, Justin Vasel, Sofia Vallecorsa, Mauro Verzetti, Xavier Vilasís-Cardona, Jean-Roch Vlimant, Ilija Vukotic, Sean-Jiun Wang, Gordon Watts, Michael Williams, Wenjing Wu, Stefan Wunsch, Kun Yang, Omar Zapata
In this document we discuss promising future research and development areas for machine learning in particle physics.
5 code implementations • 30 May 2018 • Johann Brehmer, Gilles Louppe, Juan Pavez, Kyle Cranmer
Simulators often provide the best description of real-world phenomena.
2 code implementations • 22 May 2018 • Joeri Hermans, Gilles Louppe
Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability when increasing the number of workers.
1 code implementation • 30 Apr 2018 • Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez
We present powerful new analysis techniques to constrain effective field theories at the LHC.
2 code implementations • 30 Apr 2018 • Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez
We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments.
no code implementations • 21 Dec 2017 • Mario Lezcano Casado, Atilim Gunes Baydin, David Martinez Rubio, Tuan Anh Le, Frank Wood, Lukas Heinrich, Gilles Louppe, Kyle Cranmer, Karen Ng, Wahid Bhimji, Prabhat
We consider the problem of Bayesian inference in the family of probabilistic models implicitly defined by stochastic generative models of data.
no code implementations • 4 Sep 2017 • Antonio Sutera, Célia Châtel, Gilles Louppe, Louis Wehenkel, Pierre Geurts
Dealing with datasets of very high dimension is a major challenge in machine learning.
2 code implementations • 22 Jul 2017 • Gilles Louppe, Joeri Hermans, Kyle Cranmer
We adapt the training procedure of generative adversarial networks by replacing the differentiable generative network with a domain-specific simulator.
5 code implementations • 2 Feb 2017 • Gilles Louppe, Kyunghyun Cho, Cyril Becot, Kyle Cranmer
Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images.
5 code implementations • NeurIPS 2017 • Gilles Louppe, Michael Kagan, Kyle Cranmer
Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing.
no code implementations • 12 May 2016 • Antonio Sutera, Gilles Louppe, Vân Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts
In many cases, feature selection is often more complicated than identifying a single subset of input variables that would together explain the output.
1 code implementation • 31 Aug 2015 • Gilles Louppe, Hussein Al-Natsheh, Mateusz Susik, Eamonn Maguire
Author name disambiguation in bibliographic databases is the problem of grouping together scientific publications written by the same person, accounting for potential homonyms and/or synonyms.
2 code implementations • 6 Jun 2015 • Kyle Cranmer, Juan Pavez, Gilles Louppe
This leads to a new machine learning-based approach to likelihood-free inference that is complementary to Approximate Bayesian Computation, and which does not require a prior on the model parameters.
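The classifier trick underlying this approach is easy to illustrate: train a probabilistic classifier to distinguish samples from two simulators, then recover the likelihood ratio from its output via r(x) = (1 - s(x)) / s(x). A self-contained NumPy sketch with two Gaussians standing in for simulators (sample sizes, learning rate, and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "simulators": samples from N(0, 1) and N(1, 1).
x0 = rng.normal(0.0, 1.0, size=5000)
x1 = rng.normal(1.0, 1.0, size=5000)

# Minimal logistic regression fit by gradient descent (any calibrated
# probabilistic classifier would do; hyperparameters are illustrative).
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(x0.size), np.ones(x1.size)])
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((s - y) * X)
    b -= 0.1 * np.mean(s - y)

def ratio(x):
    """Estimated likelihood ratio p0(x) / p1(x) = (1 - s(x)) / s(x)."""
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return (1.0 - s) / s
```

For these two Gaussians the exact ratio at x = 0 is exp(1/2) ≈ 1.65, which the classifier-based estimate recovers to within sampling error.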
2 code implementations • 28 Jul 2014 • Gilles Louppe
In the second part of this work, we analyse and discuss the interpretability of random forests through the lens of variable importance measures.
1 code implementation • 30 Jun 2014 • Antonio Sutera, Arnaud Joly, Vincent François-Lavet, Zixiao Aaron Qiu, Gilles Louppe, Damien Ernst, Pierre Geurts
In this work, we propose a simple yet effective solution to the problem of connectome inference in calcium imaging data.
no code implementations • NeurIPS 2013 • Gilles Louppe, Louis Wehenkel, Antonio Sutera, Pierre Geurts
Despite growing interest and practical use in various scientific areas, variable importances derived from tree-based ensemble methods are not well understood from a theoretical point of view.
4 code implementations • 1 Sep 2013 • Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake Vanderplas, Arnaud Joly, Brian Holt, Gaël Varoquaux
Scikit-learn is an increasingly popular machine learning library.
3 code implementations • 2 Jan 2012 • Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Andreas Müller, Joel Nothman, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems.