no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
Simulating sampling algorithms with people has proven to be a useful method for efficiently probing and understanding their mental representations.
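A minimal sketch of the general idea behind such methods (in the spirit of MCMC with People), where a participant's two-alternative choice plays the role of the Metropolis acceptance step; the mental density and the Luce-choice response rule below are illustrative assumptions, not this paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def human_chooses(current, proposal):
    """Hypothetical stand-in for a participant's two-alternative choice.
    Simulated here with a Luce-choice (Barker) rule over an assumed
    Gaussian mental density -- purely for illustration."""
    mental_log_p = lambda x: -0.5 * ((x - 2.0) / 1.5) ** 2  # assumed, not the paper's
    p_pick_proposal = 1.0 / (1.0 + np.exp(mental_log_p(current) - mental_log_p(proposal)))
    return rng.random() < p_pick_proposal

# A Metropolis chain whose accept/reject decisions are human choices:
# under the Luce-choice rule, the chain's stationary distribution
# matches the participant's mental representation.
x, chain = 0.0, []
for _ in range(2000):
    proposal = x + rng.normal()        # experimenter-controlled proposal
    if human_chooses(x, proposal):     # participant picks between two stimuli
        x = proposal
    chain.append(x)
```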
no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Thomas L. Griffiths
Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text.
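To make "next-word prediction" concrete, here is a toy autoregressive decoding loop; the bigram table is an assumed stand-in for the next-token distribution a real Transformer would compute over the entire context:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "model", "predicts", "text", "<eos>"]
# Toy bigram table standing in for p(w_t | w_<t); an actual LLM would
# produce this distribution by running a network over the full context.
bigram = rng.dirichlet(np.ones(len(vocab)), size=len(vocab))

# Autoregressive generation: sample one token at a time and feed it
# back into the context until an end-of-sequence token appears.
context = ["the"]
while context[-1] != "<eos>" and len(context) < 20:
    probs = bigram[vocab.index(context[-1])]    # next-token distribution
    context.append(rng.choice(vocab, p=probs))  # sample and append
print(" ".join(context))
```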
no code implementations • 21 Dec 2023 • Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths
Large language models (LLMs) can produce long, coherent passages of text, suggesting that LLMs, although trained on next-word prediction, must represent the latent structure that characterizes a document.
no code implementations • 16 Nov 2023 • Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy
The success of methods based on artificial neural networks in creating intelligent machines might seem to pose a challenge to explanations of human cognition in terms of Bayesian inference.
no code implementations • NeurIPS 2018 • Jian-Qiao Zhu, Adam N. Sanborn, Nick Chater
We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC$^3$), one of the first algorithms developed for sampling from multimodal distributions.
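A minimal sketch of MC$^3$: several chains target tempered versions $p(x)^{\beta_k}$ of the distribution, and occasional state swaps between adjacent temperatures let the cold ($\beta = 1$) chain jump between modes. The bimodal target, temperature ladder, and step size here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mc3_sample(log_p, n_chains=4, n_steps=5000, step_size=0.5, seed=0):
    """Metropolis-coupled MCMC (MC^3): run tempered chains in parallel
    and occasionally propose swaps between adjacent temperatures."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1.0, 0.2, n_chains)  # inverse temperatures; beta=1 is the target
    x = rng.normal(size=n_chains)            # one scalar state per chain
    samples = []
    for _ in range(n_steps):
        # Within-chain Metropolis update; chain k targets p(x)^beta_k.
        for k in range(n_chains):
            prop = x[k] + step_size * rng.normal()
            log_a = betas[k] * (log_p(prop) - log_p(x[k]))
            if np.log(rng.random()) < log_a:
                x[k] = prop
        # Propose swapping the states of a random adjacent pair of chains.
        k = rng.integers(n_chains - 1)
        log_a = (betas[k] - betas[k + 1]) * (log_p(x[k + 1]) - log_p(x[k]))
        if np.log(rng.random()) < log_a:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])                  # keep only the cold chain's draws
    return np.array(samples)

# Example: a bimodal target that a single Metropolis chain mixes through slowly.
log_p = lambda x: np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)
draws = mc3_sample(log_p)
```

The hot chains flatten the landscape between modes, and the swap moves transfer their between-mode jumps down to the cold chain, which is what makes MC$^3$ effective for multimodal distributions.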