Search Results for author: Nathan Grinsztajn

Found 12 papers, 5 papers with code

Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs

1 code implementation 29 Nov 2023 Andries Smit, Paul Duckworth, Nathan Grinsztajn, Thomas D. Barrett, Arnu Pretorius

In this context, multi-agent debate (MAD) has emerged as a promising strategy for enhancing the truthfulness of LLMs.

Benchmarking

Combinatorial Optimization with Policy Adaptation using Latent Space Search

1 code implementation NeurIPS 2023 Felix Chalumeau, Shikha Surana, Clement Bonnet, Nathan Grinsztajn, Arnu Pretorius, Alexandre Laterre, Thomas D. Barrett

Combinatorial Optimization underpins many real-world applications and yet, designing performant algorithms to solve these complex, typically NP-hard, problems remains a significant research challenge.

Benchmarking · Combinatorial Optimization +3

Meta-learning from Learning Curves Challenge: Lessons learned from the First Round and Design of the Second Round

no code implementations 4 Aug 2022 Manh Hung Nguyen, Lisheng Sun, Nathan Grinsztajn, Isabelle Guyon

With the lessons learned from the first round and the feedback from the participants, we have designed the second round of our challenge with a new protocol and a new meta-dataset.

AutoML · Meta-Learning

Interferometric Graph Transform for Community Labeling

no code implementations 4 Jun 2021 Nathan Grinsztajn, Louis Leconte, Philippe Preux, Edouard Oyallon

We present a new approach for learning unsupervised node representations in community graphs.

Low-Rank Projections of GCNs Laplacian

no code implementations ICLR Workshop GTRL 2021 Nathan Grinsztajn, Philippe Preux, Edouard Oyallon

In this work, we study the behavior of standard models for community detection under spectral manipulations.

Community Detection

A spectral perspective on GCNs

no code implementations 1 Jan 2021 Nathan Grinsztajn, Philippe Preux, Edouard Oyallon

In this work, we study the behavior of standard GCNs under spectral manipulations.

Geometric Deep Reinforcement Learning for Dynamic DAG Scheduling

1 code implementation 9 Nov 2020 Nathan Grinsztajn, Olivier Beaumont, Emmanuel Jeannot, Philippe Preux

In this paper, we propose a reinforcement learning approach to a realistic scheduling problem, and apply it to an algorithm commonly executed in the high-performance computing community: the Cholesky factorization.

Combinatorial Optimization · reinforcement-learning +2
