1 code implementation • 6 Jun 2023 • Takateru Yamakoshi, James L. McClelland, Adele E. Goldberg, Robert D. Hawkins
Accounts of human language processing have long appealed to implicit "situation models" that enrich comprehension with relevant but unstated world knowledge.
1 code implementation • Findings (ACL) 2022 • Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins
Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT.
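As background for this abstract: a common baseline for drawing samples from an MLM is Gibbs-style resampling, repeatedly re-masking one token and redrawing it from the model's conditional distribution given the rest of the sentence. The sketch below is a minimal illustration of that baseline using Hugging Face transformers, not the paper's own method; the model name, sentence length, and step count are illustrative assumptions.

```python
# Minimal sketch (assumptions: bert-base-uncased, fixed length, step count):
# naive Gibbs-style sampling from a masked language model.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

SEQ_LEN = 8    # illustrative sentence length
N_STEPS = 200  # illustrative number of Gibbs updates

# Initialize a fully masked sentence between [CLS] and [SEP].
ids = torch.tensor([[tokenizer.cls_token_id]
                    + [tokenizer.mask_token_id] * SEQ_LEN
                    + [tokenizer.sep_token_id]])

with torch.no_grad():
    for _ in range(N_STEPS):
        # Pick one interior position, re-mask it, and resample it from
        # the model's conditional distribution given all other tokens.
        pos = torch.randint(1, SEQ_LEN + 1, (1,)).item()
        masked = ids.clone()
        masked[0, pos] = tokenizer.mask_token_id
        probs = model(input_ids=masked).logits[0, pos].softmax(dim=-1)
        ids[0, pos] = torch.multinomial(probs, num_samples=1).item()

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Because an MLM's per-position conditionals need not be mutually consistent, a chain like this is not guaranteed to converge to a well-defined joint distribution over sentences, which is the representativeness problem the abstract raises.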
1 code implementation • EMNLP 2020 • Robert D. Hawkins, Takateru Yamakoshi, Thomas L. Griffiths, Adele E. Goldberg
Languages typically provide more than one grammatical construction to express certain types of messages.