1 code implementation • 23 Jul 2023 • Jannik Kossen, Yarin Gal, Tom Rainforth
The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input–label relationship in the context.
1 code implementation • NeurIPS 2023 • Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training.
no code implementations • 9 Nov 2022 • Jannik Kossen, Cătălina Cangea, Eszter Vértes, Andrew Jaegle, Viorica Patraucean, Ira Ktena, Nenad Tomasev, Danielle Belgrave
We introduce a challenging decision-making task that we call active acquisition for multimodal temporal data (A2MT).
no code implementations • 18 May 2022 • Andreas Kirsch, Jannik Kossen, Yarin Gal
They are more realistic than previously suggested ones, building on work by Wen et al. (2021) and Osband et al. (2022), and focus on evaluating the performance of approximate BNNs in an online supervised setting.
1 code implementation • 14 Feb 2022 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth
We propose Active Surrogate Estimators (ASEs), a new method for label-efficient model evaluation.
3 code implementations • NeurIPS 2021 • Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, Yarin Gal
We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input.
1 code implementation • 9 Mar 2021 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth
While approaches like active learning reduce the number of labels needed for model training, existing literature largely ignores the cost of labeling test data, typically unrealistically assuming large test sets for model evaluation.
1 code implementation • ICLR 2020 • Jannik Kossen, Karl Stelzner, Marcel Hussing, Claas Voelcker, Kristian Kersting
When humans observe a physical system, they can easily locate objects, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions.