1 code implementation • 12 Apr 2020 • Veronica Latcinnik, Jonathan Berant
Large pre-trained language models (LMs) have been shown to perform surprisingly well when fine-tuned on tasks that require commonsense and world knowledge.
no code implementations • ACL 2018 • Omer Goldman, Veronica Latcinnik, Ehud Naveh, Amir Globerson, Jonathan Berant
Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways.