Search Results for author: Jian-Qiao Zhu

Found 5 papers, 0 papers with code

Recovering Mental Representations from Large Language Models with Markov Chain Monte Carlo

no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths

Simulating sampling algorithms with people has proven to be a useful method for efficiently probing and understanding their mental representations.

Bayesian Inference
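The abstract above alludes to the MCMC-with-people paradigm, in which the accept/reject step of a Markov chain is answered by a participant rather than computed from a known density. As orientation only, here is a minimal Metropolis-Hastings sketch in Python; the toy target and all parameter choices are illustrative assumptions, not the authors' procedure.

```python
import math
import random

def target_logpdf(x):
    # Toy stand-in for an unnormalized density over a 1-D stimulus. In the
    # MCMC-with-people paradigm this density is never evaluated directly:
    # a human (or, per the paper above, an LLM) chooses between the current
    # state and the proposal, implicitly supplying the acceptance decision.
    return -0.5 * (x - 2.0) ** 2

def metropolis_hastings(n_steps=10_000, step=1.0, x0=0.0):
    """Plain Metropolis-Hastings with a symmetric Gaussian proposal."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)
        log_accept = min(0.0, target_logpdf(proposal) - target_logpdf(x))
        if random.random() < math.exp(log_accept):
            x = proposal
        samples.append(x)
    return samples

print(sum(metropolis_hastings()) / 10_000)  # sample mean approaches the mode at 2.0
```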

Incoherent Probability Judgments in Large Language Models

no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Thomas L. Griffiths

Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text.

Bayesian Inference
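For context on what an "incoherent" probability judgment is: probability theory requires, among other identities, that complementary events sum to one, i.e. P(A) + P(not A) = 1. The toy check below uses invented judgment values purely for illustration; they are not results from the paper.

```python
# Coherence check for complementary events: P(A) + P(not A) must equal 1.
# The judgment values here are made up for illustration.
judgments = {
    "it rains tomorrow": 0.55,
    "it does not rain tomorrow": 0.60,
}
total = sum(judgments.values())
print(f"sum = {total:.2f}, deviation from coherence = {total - 1.0:+.2f}")
```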

Deep de Finetti: Recovering Topic Distributions from Large Language Models

no code implementations • 21 Dec 2023 • Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths

Large language models (LLMs) can produce long, coherent passages of text, suggesting that LLMs, although trained on next-word prediction, must represent the latent structure that characterizes a document.

Bayesian Inference
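The title refers to de Finetti's representation theorem, which can be stated exactly: any infinitely exchangeable sequence of observations is distributed as i.i.d. draws conditioned on a latent variable, mixed over a prior on that variable. Reading the latent variable as a document's topic distribution is a gloss on the title, not a claim taken from the paper.

```latex
% de Finetti: for an infinitely exchangeable sequence x_1, x_2, ...
% there exists a latent \theta such that the x_i are i.i.d. given \theta.
p(x_1, \dots, x_n) = \int \prod_{i=1}^{n} p(x_i \mid \theta)\, p(\theta)\, \mathrm{d}\theta
```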

Bayes in the age of intelligent machines

no code implementations • 16 Nov 2023 • Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference.

Bayesian Inference

Mental Sampling in Multimodal Representations

no code implementations • NeurIPS 2018 • Jian-Qiao Zhu, Adam N. Sanborn, Nick Chater

We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC$^3$), one of the first algorithms developed for sampling from multimodal distributions.
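MC$^3$ itself is a standard algorithm (closely related to parallel tempering), so a generic sketch can be given; everything below (the bimodal toy target, the temperature ladder, the step size) is an illustrative assumption, and none of the paper's cognitive-modeling detail is reproduced. Heated chains explore a flattened copy of the target, and occasional state swaps let the cold chain jump between modes:

```python
import math
import random

def logpi(x):
    # Bimodal toy target: equal mixture of unit Gaussians at -4 and +4.
    # Plain Metropolis-Hastings tends to get stuck in one of these modes.
    a = -0.5 * (x + 4.0) ** 2
    b = -0.5 * (x - 4.0) ** 2
    m = max(a, b)  # log-sum-exp for numerical stability
    return m + math.log(0.5 * math.exp(a - m) + 0.5 * math.exp(b - m))

def mc3(n_steps=20_000, betas=(1.0, 0.5, 0.25, 0.1), step=1.0):
    """Metropolis-coupled MCMC: tempered chains plus state-swap moves."""
    xs = [0.0] * len(betas)
    cold_samples = []
    for _ in range(n_steps):
        # Local Metropolis update in each chain, targeting pi(x)^beta.
        for i, beta in enumerate(betas):
            prop = xs[i] + random.gauss(0.0, step)
            log_a = min(0.0, beta * (logpi(prop) - logpi(xs[i])))
            if random.random() < math.exp(log_a):
                xs[i] = prop
        # Propose swapping the states of a random adjacent pair of chains.
        i = random.randrange(len(betas) - 1)
        log_a = (betas[i] - betas[i + 1]) * (logpi(xs[i + 1]) - logpi(xs[i]))
        if random.random() < math.exp(min(0.0, log_a)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])  # only the beta = 1 chain targets pi itself
    return cold_samples

samples = mc3()
print(min(samples), max(samples))  # the cold chain should visit both modes
```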
