Search Results for author: Clément Moulin-Frier

Found 28 papers, 14 papers with code

Cultural evolution in populations of Large Language Models

1 code implementation 13 Mar 2024 Jérémy Perez, Corentin Léger, Marcela Ovando-Tellez, Chris Foulon, Joan Dussauld, Pierre-Yves Oudeyer, Clément Moulin-Frier

We present a framework for simulating cultural evolution in populations of LLMs, allowing the manipulation of variables known to be important in cultural evolution, such as network structure, personality, and the way social information is aggregated and transformed.
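As an illustration of the kind of simulation such a framework supports, here is a minimal sketch; the ring network, the `drop_word` transform, and all names are hypothetical stand-ins for the LLM-based aggregation and transformation the paper studies.

```python
import random

def simulate_transmission(adjacency, initial_texts, transform, steps, seed=0):
    """Toy cultural-evolution loop: at each step, every agent receives a
    text from a random neighbour and stores a transformed copy of it.
    `adjacency` maps agent -> neighbour list (the network structure);
    `transform` stands in for an LLM transforming social information."""
    rng = random.Random(seed)
    texts = dict(initial_texts)
    for _ in range(steps):
        texts = {agent: transform(texts[rng.choice(neigh)])
                 for agent, neigh in adjacency.items()}
    return texts

# Hypothetical "drift" transform: drop one random word per transmission.
_drift_rng = random.Random(1)
def drop_word(text):
    words = text.split()
    if len(words) > 1:
        words.pop(_drift_rng.randrange(len(words)))
    return " ".join(words)

ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # fully connected triad
start = {i: "a short story about cultural change" for i in ring}
final = simulate_transmission(ring, start, drop_word, steps=3)
```

Swapping the adjacency dictionary or the transform is enough to vary the network structure and the transmission dynamics independently.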

Discovering Sensorimotor Agency in Cellular Automata using Diversity Search

1 code implementation 14 Feb 2024 Gautier Hamon, Mayalen Etcheverry, Bert Wang-Chak Chan, Clément Moulin-Frier, Pierre-Yves Oudeyer

The research field of Artificial Life studies how life-like phenomena such as autopoiesis, agency, or self-regulation can self-organize in computer simulations.

Artificial Life · Navigate

Evolving Reservoirs for Meta Reinforcement Learning

1 code implementation 9 Dec 2023 Corentin Léger, Gautier Hamon, Eleni Nisioti, Xavier Hinaut, Clément Moulin-Frier

At the developmental scale, we employ these evolved reservoirs to facilitate the learning of a behavioral policy through Reinforcement Learning (RL).

Meta Reinforcement Learning · reinforcement-learning +1
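The reservoir-plus-RL idea can be sketched with a minimal echo state network; the class below and its hyperparameters are illustrative assumptions, not the paper's code. An evolutionary outer loop would tune quantities such as the spectral radius, while RL trains only a readout policy on the reservoir's features.

```python
import numpy as np

class Reservoir:
    """Minimal echo-state reservoir (an illustrative sketch). The
    recurrent weights are fixed after initialization; only a readout
    trained by RL would sit on top of the returned features."""
    def __init__(self, n_in, n_res, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = 0.1 * rng.normal(size=(n_res, n_in))
        w = rng.normal(size=(n_res, n_res))
        # Rescale recurrent weights to the target spectral radius so the
        # dynamics have fading memory instead of exploding.
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w = w
        self.state = np.zeros(n_res)

    def step(self, x):
        self.state = np.tanh(self.w @ self.state + self.w_in @ x)
        return self.state  # features for a learned readout

res = Reservoir(n_in=3, n_res=50)
for _ in range(10):
    features = res.step(np.ones(3))
```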

Meta-Diversity Search in Complex Systems, A Recipe for Artificial Open-Endedness?

no code implementations 1 Dec 2023 Mayalen Etcheverry, Bert Wang-Chak Chan, Clément Moulin-Frier, Pierre-Yves Oudeyer

Holmes incrementally learns a hierarchy of modular representations to characterize divergent sources of diversity and uses a goal-based intrinsically-motivated exploration as the diversity search strategy.

Emergence of Collective Open-Ended Exploration from Decentralized Meta-Reinforcement Learning

no code implementations 1 Nov 2023 Richard Bornemann, Gautier Hamon, Eleni Nisioti, Clément Moulin-Frier

We further find that the agents' learned collective exploration strategies extend to an open-ended task setting, allowing them to solve task trees of twice the depth seen during training.

Meta Reinforcement Learning · reinforcement-learning

SBMLtoODEjax: Efficient Simulation and Optimization of Biological Network Models in JAX

1 code implementation 17 Jul 2023 Mayalen Etcheverry, Michael Levin, Clément Moulin-Frier, Pierre-Yves Oudeyer

Advances in bioengineering and biomedicine demand a deep understanding of the dynamic behavior of biological systems, ranging from protein pathways to complex cellular processes.
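The kind of dynamic model in question can be illustrated with a one-species production/degradation ODE; this toy sketch uses plain Euler integration and does not touch the SBMLtoODEjax API, which generates JAX code from SBML files so that such simulations can be vectorized and differentiated.

```python
def simulate(k_prod, k_deg, x0=0.0, dt=0.01, steps=1000):
    """Euler integration of dx/dt = k_prod - k_deg * x, a toy stand-in
    for the protein-pathway ODE systems such tools target."""
    x = x0
    for _ in range(steps):
        x += dt * (k_prod - k_deg * x)
    return x

# Analytical steady state is k_prod / k_deg = 2.0; by t = 10 the
# trajectory has nearly converged to it.
x_final = simulate(k_prod=1.0, k_deg=0.5)
```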

Dynamics of niche construction in adaptable populations evolving in diverse environments

1 code implementation 16 May 2023 Eleni Nisioti, Clément Moulin-Frier

In this work, we study NC in simulation environments that consist of multiple, diverse niches and populations that evolve their plasticity, evolvability and niche-constructing behaviors.

Eco-evolutionary Dynamics of Non-episodic Neuroevolution in Large Multi-agent Environments

1 code implementation 18 Feb 2023 Gautier Hamon, Eleni Nisioti, Clément Moulin-Frier

Neuroevolution (NE) has recently proven a competitive alternative to learning by gradient descent in reinforcement learning tasks.


Flow-Lenia: Towards open-ended evolution in cellular automata through mass conservation and parameter localization

1 code implementation 14 Dec 2022 Erwan Plantec, Gautier Hamon, Mayalen Etcheverry, Pierre-Yves Oudeyer, Clément Moulin-Frier, Bert Wang-Chak Chan

Finally, we show that Flow-Lenia enables the integration of the parameters of the CA update rules within the CA dynamics, making them dynamic and localized. This allows for multi-species simulations in which locally coherent update rules define properties of the emerging creatures and can be mixed with neighbouring rules.

Artificial Life
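Mass conservation, the constraint Flow-Lenia adds to Lenia, can be illustrated with a toy 1D update that moves mass rather than recomputing it; this is not the Flow-Lenia rule itself, only the conservation principle it relies on.

```python
def step_conserved(mass, flow_rate=0.25):
    """One update of a toy 1D mass-conserving CA. Each cell ships
    `flow_rate` of its mass to each neighbour on a periodic ring;
    because mass is moved rather than recomputed, the total is
    conserved exactly at every step."""
    n = len(mass)
    new = [0.0] * n
    for i, m in enumerate(mass):
        out = flow_rate * m
        new[i] += m - 2.0 * out        # mass that stays in place
        new[(i - 1) % n] += out        # flow to the left neighbour
        new[(i + 1) % n] += out        # flow to the right neighbour
    return new

state = [0.0, 0.0, 1.0, 0.0, 0.0]      # all mass starts in one cell
for _ in range(5):
    state = step_conserved(state)
```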

Contrastive Multimodal Learning for Emergence of Graphical Sensory-Motor Communication

no code implementations 3 Oct 2022 Tristan Karch, Yoann Lemesle, Romain Laroche, Clément Moulin-Frier, Pierre-Yves Oudeyer

In this paper, we investigate whether artificial agents can develop a shared language in an ecological setting where communication relies on a sensory-motor channel.

Social Network Structure Shapes Innovation: Experience-sharing in RL with SAPIENS

no code implementations 10 Jun 2022 Eleni Nisioti, Mateo Mahaut, Pierre-Yves Oudeyer, Ida Momennejad, Clément Moulin-Frier

Comparing the level of innovation achieved by different social network structures across different tasks shows that, consistent with human findings, experience sharing within a dynamic structure achieves the highest level of innovation in tasks with a deceptive nature and large search spaces.

Cultural Vocal Bursts Intensity Prediction · Reinforcement Learning (RL)

Language and Culture Internalisation for Human-Like Autotelic AI

no code implementations 2 Jun 2022 Cédric Colas, Tristan Karch, Clément Moulin-Frier, Pierre-Yves Oudeyer

Building autonomous agents able to grow open-ended repertoires of skills across their lives is a fundamental goal of artificial intelligence (AI).

Attribute · Cultural Vocal Bursts Intensity Prediction

Plasticity and evolvability under environmental variability: the joint role of fitness-based selection and niche-limited competition

1 code implementation 17 Feb 2022 Eleni Nisioti, Clément Moulin-Frier

In this work, we study the interplay between environmental dynamics and adaptation in a minimal model of the evolution of plasticity and evolvability.

Artificial Life

Learning to Guide and to Be Guided in the Architect-Builder Problem

1 code implementation ICLR 2022 Paul Barde, Tristan Karch, Derek Nowrouzezahrai, Clément Moulin-Frier, Christopher Pal, Pierre-Yves Oudeyer

ABIG results in a low-level, high-frequency, guiding communication protocol that not only enables an architect-builder pair to solve the task at hand, but that can also generalize to unseen tasks.

Imitation Learning

Socially Supervised Representation Learning: the Role of Subjectivity in Learning Efficient Representations

no code implementations 20 Sep 2021 Julius Taylor, Eleni Nisioti, Clément Moulin-Frier

In this work, we propose that aligning internal subjective representations, which naturally arise in a multi-agent setup where agents receive partial observations of the same underlying environmental state, can lead to more data-efficient representations.

Representation Learning · Self-Supervised Learning

Grounding Spatio-Temporal Language with Transformers

1 code implementation NeurIPS 2021 Tristan Karch, Laetitia Teodorescu, Katja Hofmann, Clément Moulin-Frier, Pierre-Yves Oudeyer

While there is an extensive literature studying how machines can learn grounded language, the topic of how to learn spatio-temporal linguistic concepts is still largely uncharted.

Grounding Artificial Intelligence in the Origins of Human Behavior

no code implementations 15 Dec 2020 Eleni Nisioti, Clément Moulin-Frier

Recent advances in Artificial Intelligence (AI) have revived the quest for agents able to acquire an open-ended repertoire of skills.

Reinforcement Learning (RL)

EpidemiOptim: A Toolbox for the Optimization of Control Policies in Epidemiological Models

2 code implementations 9 Oct 2020 Cédric Colas, Boris Hejblum, Sébastien Rouillon, Rodolphe Thiébaut, Pierre-Yves Oudeyer, Clément Moulin-Frier, Mélanie Prague

Epidemiologists model the dynamics of epidemics in order to propose control strategies based on pharmaceutical and non-pharmaceutical interventions (contact limitation, lockdown, vaccination, etc.).

Epidemiology · Evolutionary Algorithms +5
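The kind of model being optimized can be illustrated with the textbook SIR equations; the parameter values below are hypothetical, and EpidemiOptim itself wraps richer epidemiological models than this one.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One Euler step of the textbook SIR model. `beta` is the
    transmission rate that non-pharmaceutical interventions such as a
    lockdown would reduce; `gamma` is the recovery rate. A control
    policy would choose `beta` over time to trade off health and
    economic costs."""
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries

s, i, r = 0.99, 0.01, 0.0      # fractions of the population
for _ in range(100):           # simulate 10 time units
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
```

Note that the three compartments always sum to one: the update only moves population between them.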

Language-Goal Imagination to Foster Creative Exploration in Deep RL

no code implementations ICML Workshop LaReL 2020 Tristan Karch, Nicolas Lair, Cédric Colas, Jean-Michel Dussoux, Clément Moulin-Frier, Peter Ford Dominey, Pierre-Yves Oudeyer

We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity.

Deep Sets for Generalization in RL

no code implementations 20 Mar 2020 Tristan Karch, Cédric Colas, Laetitia Teodorescu, Clément Moulin-Frier, Pierre-Yves Oudeyer

This paper investigates the idea of encoding object-centered representations in the design of the reward function and policy architectures of a language-guided reinforcement learning agent.

Navigate · Object +3
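The object-centered encoding in question can be sketched with a minimal Deep Sets module; the weights and shapes below are hypothetical, chosen only to demonstrate the permutation invariance that such architectures exploit.

```python
import numpy as np

def deep_set_encoder(objects, w_phi, w_rho):
    """Minimal Deep Sets encoder: the same network phi embeds every
    object, the embeddings are summed, and rho maps the pooled vector
    to the output. The sum makes the output invariant to object
    ordering, which is what lets reward and policy architectures
    generalize over sets of objects."""
    phi = np.tanh(objects @ w_phi)   # per-object embedding
    pooled = phi.sum(axis=0)         # permutation-invariant pooling
    return np.tanh(pooled @ w_rho)

rng = np.random.default_rng(0)
w_phi = rng.normal(size=(4, 8))
w_rho = rng.normal(size=(8, 2))
objs = rng.normal(size=(5, 4))       # 5 objects, 4 features each
out = deep_set_encoder(objs, w_phi, w_rho)
out_perm = deep_set_encoder(objs[::-1], w_phi, w_rho)  # reordered objects
```

Reordering the objects leaves the output unchanged, so the same encoder handles scenes with any arrangement (or number) of objects.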

Multi-Agent Reinforcement Learning as a Computational Tool for Language Evolution Research: Historical Context and Future Challenges

no code implementations 20 Feb 2020 Clément Moulin-Frier, Pierre-Yves Oudeyer

Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL).

BIG-bench Machine Learning · Multi-agent Reinforcement Learning +3

Decision Making under Uncertainty: A Quasimetric Approach

no code implementations 31 Dec 2013 Steve N'Guyen, Clément Moulin-Frier, Jacques Droulez

Schematically, two main approaches have been followed: either the agent learns by trial and error which option is the correct one to choose in a given situation, or the agent already has some knowledge of the possible consequences of its decisions, this knowledge generally being expressed as a conditional probability distribution.

Decision Making · Decision Making Under Uncertainty

Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

no code implementations 20 Sep 2013 Clément Moulin-Frier, M. A. Arbib

The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener.
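The probability update described above can be sketched as a plain Bayesian reweighting; the phoneme labels and numbers below are hypothetical, and this is only the general form of the update, not the model's actual equations.

```python
def bayes_update(prior, likelihood):
    """Bayes' rule, posterior proportional to likelihood * prior,
    normalized: a toy stand-in for how a word hypothesis reweights
    the probabilities linking an incoming sound to phonemes in the
    listener's native repertoire."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"/b/": 0.5, "/p/": 0.5}         # listener's phoneme repertoire
likelihood = {"/b/": 0.8, "/p/": 0.2}    # evidence from the heard sound
posterior = bayes_update(prior, likelihood)
```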
