no code implementations • 17 Nov 2023 • Karl J. Friston, Lancelot Da Costa, Alexander Tschantz, Alex Kiefer, Tommaso Salvatori, Victorita Neacsu, Magnus Koudahl, Conor Heins, Noor Sajid, Dimitrije Markovic, Thomas Parr, Tim Verbelen, Christopher L Buckley
This paper concerns structure learning or discovery of discrete generative models.
no code implementations • 7 Nov 2023 • Poppy Collis, Paul F Kinghorn, Christopher L Buckley
The ability to invent new tools has been identified as an important facet of our species' capacity to solve problems in dynamic and novel environments.
no code implementations • 2 Dec 2022 • Karl J Friston, Maxwell J D Ramstead, Alex B Kiefer, Alexander Tschantz, Christopher L Buckley, Mahault Albarracin, Riddhi J Pitliya, Conor Heins, Brennan Klein, Beren Millidge, Dalton A R Sakthivadivel, Toby St Clere Smithe, Magnus Koudahl, Safae Essafi Tremblay, Capm Petersen, Kaiser Fung, Jason G Fox, Steven Swanson, Dan Mapes, Gabriel René
In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world -- also known as self-evidencing.
no code implementations • 15 Aug 2022 • Paul F Kinghorn, Beren Millidge, Christopher L Buckley
Predictive Coding Networks (PCNs) aim to learn a generative model of the world.
1 code implementation • 20 Jul 2022 • Beren Millidge, Christopher L Buckley
Recent work has uncovered close links between classical reinforcement learning algorithms, Bayesian filtering, and Active Inference, which lets us understand value functions in terms of Bayesian posteriors.
no code implementations • 23 May 2022 • Tomasz Korbak, Ethan Perez, Christopher L Buckley
We show that KL-regularised RL is equivalent to variational inference: approximating a Bayesian posterior which specifies how to update a prior LM to conform with evidence provided by the reward function.
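The equivalence can be made concrete in a toy setting: under KL-regularised RL, the optimal policy is the Bayesian posterior obtained by reweighting the prior LM by exponentiated reward. The sketch below is our illustration, not the paper's code; the four-token distribution, rewards, and penalty coefficient `beta` are made-up assumptions.

```python
import numpy as np

# Hypothetical toy setup: a prior LM distribution over 4 candidate tokens
# and a reward for each. Under KL-regularised RL, the optimal policy is
# the Bayesian posterior: pi*(x) ∝ prior(x) * exp(reward(x) / beta).
prior = np.array([0.4, 0.3, 0.2, 0.1])    # prior LM probabilities
reward = np.array([1.0, 0.0, 2.0, -1.0])  # reward acting as log-evidence
beta = 1.0                                # KL penalty coefficient

unnormalised = prior * np.exp(reward / beta)
posterior = unnormalised / unnormalised.sum()

print(posterior)  # a proper distribution, shifted toward high-reward tokens
```

Raising `beta` keeps the policy closer to the prior LM; lowering it lets the reward dominate.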
no code implementations • 5 Apr 2022 • Alexander Tschantz, Beren Millidge, Anil K Seth, Christopher L Buckley
This is at odds with evidence that several aspects of visual perception - including complex forms of object recognition - arise from an initial "feedforward sweep" that occurs on fast timescales which preclude substantial recurrent activity.
no code implementations • 30 Aug 2021 • Beren Millidge, Anil Seth, Christopher L Buckley
The Free Energy Principle (FEP) is an influential and controversial theory which postulates a deep and powerful connection between the stochastic thermodynamics of self-organization and learning through variational inference.
no code implementations • 27 Jul 2021 • Beren Millidge, Anil Seth, Christopher L Buckley
Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world.
1 code implementation • 13 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley
The recently proposed Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm using only local learning rules.
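The core idea of AR can be sketched in a few lines: a per-layer error activation relaxes, using only locally available quantities, toward a fixed point that equals the backpropagated gradient. This is a hedged illustration under our own assumptions (a two-layer tanh network, quadratic loss, made-up shapes), not the paper's implementation.

```python
import numpy as np

# Sketch of the Activation Relaxation (AR) idea: the hidden error
# activation x1 relaxes toward the backprop gradient using only local
# quantities (the layer above's error, the local weights, the local
# activation derivative). Shapes and loss are illustrative assumptions.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
inp = rng.standard_normal(3)
target = rng.standard_normal(2)

# forward pass
a1 = W1 @ inp
a2 = W2 @ np.tanh(a1)

# exact backprop gradients, computed only for comparison
delta2 = a2 - target
delta1 = (W2.T @ delta2) * (1 - np.tanh(a1) ** 2)

# AR dynamics: the output error is clamped; the hidden error relaxes
x2 = delta2.copy()
x1 = np.zeros(4)
eta = 0.1
for _ in range(200):
    x1 += eta * (-x1 + (W2.T @ x2) * (1 - np.tanh(a1) ** 2))

print(np.max(np.abs(x1 - delta1)))  # relaxed state matches backprop
```

At the fixed point, `x1` equals `delta1` exactly, which is what makes the scheme an approximation to backpropagation built from local rules.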
no code implementations • 2 Oct 2020 • Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, which underlies both perception and learning, is the minimization of prediction errors.
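The central computation posited here, minimization of prediction errors, can be sketched minimally: a latent estimate is refined by gradient descent on the squared error between an observation and its top-down prediction. The generative weights, observation, and step size below are our illustrative assumptions, not material from the paper.

```python
import numpy as np

# Minimal one-layer predictive coding sketch: the latent estimate mu is
# updated to reduce the prediction error between the observation y and
# the top-down prediction W @ mu.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))  # generative (prediction) weights
y = rng.standard_normal(5)       # observation
mu = np.zeros(3)                 # latent estimate

lr = 0.05
for _ in range(500):
    eps = y - W @ mu             # prediction error
    mu += lr * W.T @ eps         # descend on 0.5 * ||eps||^2 w.r.t. mu

final_err = np.linalg.norm(y - W @ mu)
print(final_err)  # smaller than the initial error ||y||
```

Perception corresponds to this inference over `mu`; learning would apply the analogous update to `W`.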