Search Results for author: Tom Kouwenhoven

Found 5 papers, 1 paper with code

Is Temperature the Creativity Parameter of Large Language Models?

no code implementations • 1 May 2024 • Max Peeperkorn, Tom Kouwenhoven, Dan Brown, Anna Jordanous

Large language models (LLMs) are applied to all sorts of creative tasks, and their outputs vary from beautiful, to peculiar, to pastiche, into plain plagiarism.

Memory-Augmented Generative Adversarial Transformers

no code implementations • 29 Feb 2024 • Stephan Raaijmakers, Roos Bakker, Anita Cremers, Roy de Kleijn, Tom Kouwenhoven, Tessa Verhoef

Conversational AI systems that rely on Large Language Models, like Transformers, have difficulty interweaving external data (like facts) with the language they generate.

Generative Adversarial Network

EduGym: An Environment and Notebook Suite for Reinforcement Learning Education

1 code implementation • 17 Nov 2023 • Thomas M. Moerland, Matthias Müller-Brockhausen, Zhao Yang, Andrius Bernatavicius, Koen Ponse, Tom Kouwenhoven, Andreas Sauter, Michiel van der Meer, Bram Renting, Aske Plaat

To address this, we introduce EduGym, a set of educational reinforcement learning environments and associated interactive notebooks tailored for education.

reinforcement-learning

Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests

no code implementations • 31 Oct 2023 • Max J. van Duijn, Bram M. A. van Dijk, Tom Kouwenhoven, Werner de Valk, Marco R. Spruit, Peter van der Putten

To what degree should we ascribe cognitive capacities to Large Language Models (LLMs), such as the ability to reason about intentions and beliefs known as Theory of Mind (ToM)?

Benchmarking
