no code implementations • 9 May 2024 • Tianji Cai, Garrett W. Merz, François Charton, Niklas Nolte, Matthias Wilhelm, Kyle Cranmer, Lance J. Dixon
We pursue the use of deep learning methods to improve state-of-the-art computations in theoretical high-energy physics.
no code implementations • 2 Feb 2024 • Samuel Stevens, Emily Wenger, Cathy Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin Lauter
Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.
1 code implementation • 29 Aug 2023 • François Charton
Models trained on uniformly distributed operands only learn a handful of GCDs (up to $38$ of the GCDs $\leq 100$).
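The scarcity of GCDs under uniform sampling is easy to see empirically: for uniform operands, the probability that $\gcd(a,b)=k$ falls off like $1/k^2$, so small GCDs dominate the training data. A minimal sketch (the sampling range and sample size are illustrative assumptions):

```python
import math
import random
from collections import Counter

random.seed(0)
N = 100_000

# Sample pairs of uniform operands and tally their GCDs.
counts = Counter(
    math.gcd(random.randint(1, 10**6), random.randint(1, 10**6))
    for _ in range(N)
)

# Under uniform sampling, P(gcd = k) ~ (6 / pi^2) / k^2, so gcd = 1
# alone accounts for roughly 61% of all pairs.
frac_gcd_1 = counts[1] / N
print(f"fraction with gcd 1: {frac_gcd_1:.2f}")
```

A model trained on such data therefore sees larger GCDs only rarely, consistent with the limited set of GCDs learned.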
no code implementations • 27 Jun 2023 • Samy Jelassi, Stéphane d'Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, François Charton
We find that relative position embeddings enable length generalization for simple tasks, such as addition: models trained on $5$-digit numbers can perform $15$-digit sums.
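The relative-position mechanism behind this result can be sketched as an additive bias on attention scores that depends only on the distance between query and key positions, so the same parameters apply at every absolute position. This is a minimal additive-bias variant in NumPy, not the paper's exact embedding scheme:

```python
import numpy as np

def attention_with_relative_bias(q, k, v, bias):
    """Scaled dot-product attention with an additive relative-position bias.

    q, k, v: (seq_len, d) arrays; bias: (2*seq_len - 1,) array indexed
    by the relative offset (j - i), shared across absolute positions.
    """
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Add bias[j - i] to the score between query i and key j.
    rel = np.arange(n)[None, :] - np.arange(n)[:, None] + (n - 1)
    scores = scores + bias[rel]
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 4, 8
q, k, v = rng.normal(size=(3, n, d))
bias = rng.normal(size=2 * n - 1)
out = attention_with_relative_bias(q, k, v, bias)
print(out.shape)  # (4, 8)
```

Because the bias is indexed by offset rather than absolute position, the mechanism extends naturally to sequences longer than those seen in training, which is what makes length generalization possible.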
no code implementations • 21 Dec 2022 • Chris Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, Charysse Redwood
Traditional approaches to RL have focused on learning decision policies directly from episodic decisions, while slowly and implicitly learning the semantics of compositional representations needed for generalization.
1 code implementation • 31 Oct 2022 • François Charton
This paper investigates the failure cases and out-of-distribution behavior of transformers trained on matrix inversion and eigenvalue decomposition.
no code implementations • 11 Jul 2022 • Emily Wenger, Mingjie Chen, François Charton, Kristin Lauter
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers.
3 code implementations • 22 Apr 2022 • Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, François Charton
Symbolic regression, the task of predicting the mathematical expression of a function from observations of its values, is difficult and usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function.
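The second step, fitting constants once a skeleton is known, can be sketched with plain least squares; the skeleton $c_0 \sin(c_1 x) + c_2$ and the synthetic data are illustrative assumptions:

```python
import numpy as np

# Hypothetical skeleton from step 1: f(x) = c0 * sin(c1 * x) + c2.
# Synthetic observations of the target 2*sin(3x) + 1 (assumed data).
x = np.linspace(-2.0, 2.0, 200)
y = 2.0 * np.sin(3.0 * x) + 1.0

# Step 2: the loss is non-convex in c1, so scan c1 over a grid and
# solve the remaining *linear* least-squares problem for (c0, c2).
best = None
for c1 in np.linspace(0.5, 5.0, 451):
    A = np.column_stack([np.sin(c1 * x), np.ones_like(x)])
    (c0, c2), *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.mean((A @ np.array([c0, c2]) - y) ** 2)
    if best is None or err < best[0]:
        best = (err, c0, c1, c2)

err, c0, c1, c2 = best
print(round(c0, 2), round(c1, 2), round(c2, 2))  # ≈ 2.0, 3.0, 1.0
```

The grid over $c_1$ stands in for the non-convex optimization: a gradient method started far from the true frequency can easily get stuck in a local minimum, which is exactly why the constant-fitting step is the fragile part of the pipeline.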
no code implementations • 12 Jan 2022 • Stéphane d'Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, François Charton
Symbolic regression, i.e., predicting a function from the observation of its values, is well known to be a challenging task.
no code implementations • 7 Dec 2021 • François Charton, Amaury Hayat, Sean T. McQuade, Nathaniel J. Merrill, Benedetto Piccoli
We show that deep learning models, and especially architectures like the Transformer, originally intended for natural language, can be trained on randomly generated datasets to predict to very high accuracy both the qualitative and quantitative features of metabolic networks.
1 code implementation • 3 Dec 2021 • François Charton
Transformers can learn to perform numerical computations from examples only.
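Work in this line typically trains sequence-to-sequence models on problems serialized as token sequences, with numbers encoded as a sign token followed by digit tokens. A minimal sketch of such an encoding (the exact vocabulary is an assumption; published variants also use larger bases):

```python
def encode_int(n, base=10):
    """Encode an integer as a sign token plus base-`base` digit tokens,
    most significant digit first."""
    sign = "+" if n >= 0 else "-"
    n = abs(n)
    digits = []
    while True:
        digits.append(str(n % base))
        n //= base
        if n == 0:
            break
    return [sign] + digits[::-1]

# A training pair for, e.g., subtraction: source and target token lists.
src = encode_int(52) + ["SUB"] + encode_int(7)
tgt = encode_int(45)
print(src, tgt)  # ['+', '5', '2', 'SUB', '+', '7'] ['+', '4', '5']
```

With numbers reduced to short token sequences over a small vocabulary, arithmetic becomes an ordinary translation problem that a transformer can be trained on from examples alone.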
1 code implementation • ICLR 2021 • François Charton, Amaury Hayat, Guillaume Lample
Using transformers over large generated datasets, we train models to learn mathematical properties of differential systems, such as local stability, behavior at infinity and controllability.
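Local stability, one of the properties mentioned, has a classical numerical criterion that such a model must implicitly capture: an equilibrium of $\dot{x} = f(x)$ is locally asymptotically stable when every eigenvalue of the Jacobian at the equilibrium has negative real part. A minimal sketch on an assumed 2-D system:

```python
import numpy as np

def is_locally_stable(jacobian):
    """An equilibrium is locally asymptotically stable iff all
    eigenvalues of the Jacobian there have strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian).real < 0))

# Damped oscillator x'' = -x - 0.5 x' as a first-order system:
# the equilibrium at the origin is stable.
stable = np.array([[0.0, 1.0], [-1.0, -0.5]])
# Flipping the damping sign (negative damping) makes it unstable.
unstable = np.array([[0.0, 1.0], [-1.0, 0.5]])

print(is_locally_stable(stable), is_locally_stable(unstable))  # True False
```

Generating labels this way over randomly sampled systems is one plausible recipe for the kind of large synthetic training sets the abstract describes.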
7 code implementations • ICLR 2020 • Guillaume Lample, François Charton
Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data.
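To make symbolic data digestible by a sequence model, this line of work serializes expression trees into prefix (Polish) notation token sequences. A minimal sketch with an assumed nested-tuple tree representation:

```python
# An expression tree as nested tuples: (operator, child, ...) or a leaf.
# Example: cos(x) * (3 + x)  ->  ("mul", ("cos", "x"), ("add", "3", "x"))

def to_prefix(node):
    """Flatten an expression tree into prefix (Polish) notation tokens."""
    if isinstance(node, str):
        return [node]
    op, *children = node
    tokens = [op]
    for child in children:
        tokens.extend(to_prefix(child))
    return tokens

expr = ("mul", ("cos", "x"), ("add", "3", "x"))
print(to_prefix(expr))  # ['mul', 'cos', 'x', 'add', '3', 'x']
```

Prefix notation needs no parentheses and is unambiguous as long as each operator's arity is fixed, which makes it a convenient vocabulary for sequence-to-sequence training on symbolic mathematics.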