no code implementations • 13 Feb 2024 • Maurice Diesendruck, Jianzhe Lin, Shima Imani, Gayathri Mahalingam, Mingyang Xu, Jie Zhao
When LLMs perform zero-shot inference, they typically use a prompt with a task specification and generate a completion.
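A minimal sketch of what such a zero-shot prompt looks like: only a task specification and the input, with no in-context examples. The helper name and prompt layout are illustrative, not from the paper.

```python
def build_zero_shot_prompt(task_spec: str, input_text: str) -> str:
    # Zero-shot: task specification + input, no worked examples.
    # The model's completion after "Output:" is taken as the prediction.
    return f"{task_spec}\n\nInput: {input_text}\nOutput:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the input as positive or negative.",
    "The movie was a delight from start to finish.",
)
```

The resulting string would then be sent to an LLM; the completion it generates is the zero-shot answer.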
no code implementations • 1 Sep 2023 • Jianzhe Lin, Maurice Diesendruck, Liang Du, Robin Abraham
We make two initial observations about prompting with batched data.
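Prompting with batched data packs several data points into a single prompt so that one LLM call answers all of them, amortizing the task specification across the batch. A hypothetical sketch of such a batched prompt builder (the helper name and numbering scheme are assumptions, not the paper's format):

```python
def build_batched_prompt(task_spec: str, inputs: list[str]) -> str:
    # Hypothetical helper: one task specification, many numbered inputs,
    # answered in a single LLM call instead of one call per data point.
    lines = [task_spec, ""]
    for i, item in enumerate(inputs, 1):
        lines.append(f"[{i}] {item}")
    lines.append("")
    lines.append("Answer each numbered input on its own line as '[i] <answer>'.")
    return "\n".join(lines)

batched = build_batched_prompt(
    "Classify the sentiment of each input as positive or negative.",
    ["I loved it.", "Utterly disappointing."],
)
```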
1 code implementation • 4 Feb 2022 • Leo Betthauser, Urszula Chajewska, Maurice Diesendruck, Rohith Pesala
Rapid progress in representation learning has led to a proliferation of embedding models, and to associated challenges of model selection and practical application.
no code implementations • 7 Jun 2018 • Maurice Diesendruck, Ethan R. Elenberg, Rajat Sen, Guy W. Cole, Sanjay Shakkottai, Sinead A. Williamson
Deep generative networks can simulate from a complex target distribution by minimizing a loss with respect to samples from that distribution.
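One common instance of such a sample-based loss is the squared maximum mean discrepancy (MMD) between generated and target samples; a generator trained to minimize it learns to match the target distribution. A minimal NumPy sketch of the (biased, V-statistic) estimate with a Gaussian kernel, shown as an assumption-level illustration rather than the paper's exact objective:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between the rows of x and y.
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples x and y.
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())
```

In practice the generator's samples play the role of `x`, the target's samples the role of `y`, and the loss is minimized by gradient descent on the generator's parameters in an autodiff framework.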
no code implementations • ICLR 2018 • Maurice Diesendruck, Guy W. Cole, Sinead Williamson
In this paper, we construct an estimator for the MMD between P and Q when we have access to P only through some biased sample-selection mechanism, and we suggest methods for estimating this mechanism when it is not already known.