Search Results for author: Pierre-Alexandre Kamienny

Found 10 papers, 5 papers with code

Controllable Neural Symbolic Regression

no code implementations • 20 Apr 2023 • Tommaso Bendinelli, Luca Biggio, Pierre-Alexandre Kamienny

In symbolic regression, the goal is to find an analytical expression that accurately fits experimental data with the minimal use of mathematical symbols such as operators, variables, and constants.
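A minimal sketch of that objective, using made-up data and candidate expressions (not from the paper): each candidate is scored by how well it fits the data plus a penalty on the number of symbols it uses.

import numpy as np
import sympy as sp

x = sp.Symbol("x")
xs = np.linspace(-2, 2, 100)
ys = xs ** 2 + np.sin(xs)  # hidden ground-truth observations (illustrative)

def score(expr, complexity_weight=0.01):
    f = sp.lambdify(x, expr, "numpy")
    mse = np.mean((f(xs) - ys) ** 2)                          # accuracy of the fit
    n_symbols = sp.count_ops(expr) + len(expr.free_symbols)   # rough symbol count
    return mse + complexity_weight * n_symbols                # penalise longer expressions

candidates = [x ** 2, x ** 2 + sp.sin(x), x ** 3 + sp.cos(x) * sp.exp(x)]
print(min(candidates, key=score))                             # x**2 + sin(x)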

Evolutionary Algorithms • regression +1

End-to-end symbolic regression with transformers

3 code implementations • 22 Apr 2022 • Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, François Charton

Symbolic regression, the task of predicting the mathematical expression of a function from the observation of its values, is a difficult task which usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function.
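The second step of that conventional pipeline can be sketched as follows (a minimal illustration with a made-up skeleton and data, not the paper's end-to-end method): the skeleton's constants are recovered by minimising a non-convex least-squares loss.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = 2.0 * np.sin(1.5 * x) + 0.5              # "observed" values of the target function

def skeleton(c, x):
    # predicted skeleton: c0 * sin(c1 * x) + c2, with the constants still unknown
    return c[0] * np.sin(c[1] * x) + c[2]

def loss(c):
    return np.mean((skeleton(c, x) - y) ** 2)

result = minimize(loss, x0=np.ones(3), method="BFGS")
print(result.x)  # ideally close to [2.0, 1.5, 0.5]; the loss is non-convex,
                 # so a poor initialisation can end in a local minimum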

regression • Symbolic Regression

Deep Symbolic Regression for Recurrent Sequences

no code implementations • 12 Jan 2022 • Stéphane d'Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, François Charton

Symbolic regression, i.e. predicting a function from the observation of its values, is well known to be a challenging task.
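For recurrent sequences, the task amounts to recovering a symbolic rule that explains the observed terms. A minimal sketch with a made-up example (not the paper's model): a candidate recurrence is replayed from the first terms and checked against the sequence.

observed = [1, 1, 2, 3, 5, 8, 13, 21]               # e.g. a Fibonacci prefix

def explains(recurrence, seq, order=2):
    # replay the candidate rule starting from the first `order` terms
    pred = list(seq[:order])
    while len(pred) < len(seq):
        pred.append(recurrence(pred))
    return pred == seq

print(explains(lambda u: u[-1] + u[-2], observed))   # True: u_n = u_{n-1} + u_{n-2}
print(explains(lambda u: 2 * u[-1], observed))       # False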

regression • Symbolic Regression

SaLinA: Sequential Learning of Agents

1 code implementation • 15 Oct 2021 • Ludovic Denoyer, Alfredo De la Fuente, Song Duong, Jean-Baptiste Gaya, Pierre-Alexandre Kamienny, Daniel H. Thompson

SaLinA is a simple library that makes it easy to implement complex sequential learning models, including reinforcement learning algorithms.

reinforcement-learning • Reinforcement Learning (RL)

FACMAC: Factored Multi-Agent Centralised Policy Gradients

3 code implementations • NeurIPS 2021 • Bei Peng, Tabish Rashid, Christian A. Schroeder de Witt, Pierre-Alexandre Kamienny, Philip H. S. Torr, Wendelin Böhmer, Shimon Whiteson

We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces.
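A minimal sketch in the spirit of a factored centralised critic (illustrative layer sizes and a simple mixer, not the authors' implementation): each agent has a utility network, a state-conditioned mixer combines the per-agent utilities into a joint value, and the policies can then be updated with the centralised gradient of that joint value with respect to all agents' actions.

import torch
import torch.nn as nn

class FactoredCritic(nn.Module):
    def __init__(self, n_agents, obs_dim, act_dim, state_dim, hidden=64):
        super().__init__()
        # one utility network per agent: Q_i(o_i, a_i)
        self.utilities = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_agents)
        ])
        # mixer conditioned on the global state; a simple weighted sum is used here for brevity
        self.mixer = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_agents))

    def forward(self, obs, actions, state):
        # obs: (batch, n_agents, obs_dim), actions: (batch, n_agents, act_dim)
        qs = torch.cat([u(torch.cat([obs[:, i], actions[:, i]], dim=-1))
                        for i, u in enumerate(self.utilities)], dim=-1)
        weights = torch.abs(self.mixer(state))           # non-negative mixing weights
        return (weights * qs).sum(dim=-1, keepdim=True)  # joint Q-value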

Q-Learning • SMAC +2
