no code implementations • 12 Mar 2024 • Sahand Sharifzadeh, Christos Kaplanis, Shreya Pathak, Dharshan Kumaran, Anastasija Ilic, Jovana Mitrovic, Charles Blundell, Andrea Banino
The creation of high-quality human-labeled image-caption datasets presents a significant bottleneck in the development of Visual-Language Models (VLMs).
no code implementations • 20 Feb 2023 • Beatrice Bevilacqua, Kyriacos Nikiforou, Borja Ibarz, Ioana Bica, Michela Paganini, Charles Blundell, Jovana Mitrovic, Petar Veličković
We evaluate our method on the CLRS algorithmic reasoning benchmark, where we show up to 3$\times$ improvements on the OOD test data.
2 code implementations • 12 Jan 2023 • Matko Bošnjak, Pierre H. Richemond, Nenad Tomasev, Florian Strub, Jacob C. Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic
We propose a new semi-supervised learning method, Semantic Positives via Pseudo-Labels (SemPPL), that combines labelled and unlabelled data to learn informative representations.
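A minimal sketch of the pseudo-labelling idea behind semantic positives: each unlabelled embedding is given the majority label of its nearest labelled neighbours, so that same-pseudo-label pairs can later be treated as positives. This is an illustrative k-NN scheme, not the paper's implementation; the function name and `k` are assumptions.

```python
import numpy as np

def knn_pseudo_labels(unlabelled, labelled, labels, k=3):
    """Assign each unlabelled embedding the majority label of its
    k nearest labelled embeddings (cosine similarity)."""
    # Normalise rows so dot products are cosine similarities.
    u = unlabelled / np.linalg.norm(unlabelled, axis=1, keepdims=True)
    l = labelled / np.linalg.norm(labelled, axis=1, keepdims=True)
    sims = u @ l.T                            # (n_unlabelled, n_labelled)
    nn = np.argsort(-sims, axis=1)[:, :k]     # k nearest labelled indices
    # Majority vote over the neighbours' labels.
    return np.array([np.bincount(labels[row]).argmax() for row in nn])
```

Points with the same pseudo-label can then be sampled as additional positives in the contrastive objective, alongside the usual augmentation-based positives.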
no code implementations • 13 Jan 2022 • Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic
Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures.
Ranked #14 on Semantic Segmentation on PASCAL VOC 2012 val

2 code implementations • ICML Workshop URL 2021 • Andrea Banino, Adrià Puigdomènech Badia, Jacob Walker, Tim Scholtes, Jovana Mitrovic, Charles Blundell
Many reinforcement learning (RL) agents require a large amount of experience to solve tasks.
no code implementations • NeurIPS Workshop ICBINB 2020 • Jovana Mitrovic, Brian McWilliams, Melanie Rey
Usually, the other datapoints in the batch serve as negatives for a given datapoint.
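A minimal sketch of how in-batch negatives enter a contrastive (InfoNCE-style) loss: each anchor's own positive sits on the diagonal of the similarity matrix, while the positives of every other batch element act as its negatives. Function and parameter names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss in which, for each anchor, the other batch elements'
    positives serve as the negatives (in-batch negatives)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B); diagonal = positive pairs
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the target class on the diagonal.
    return -np.mean(np.diag(log_probs))
```

The batch size thus controls how many negatives each anchor sees, which is why contrastive methods are often sensitive to it.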
2 code implementations • 15 Oct 2020 • Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, Charles Blundell
Self-supervised learning has emerged as a strategy to reduce reliance on costly supervision by pretraining representations using only unlabeled data.
Ranked #77 on Self-Supervised Image Classification on ImageNet
no code implementations • 21 Aug 2020 • Nan Rosemary Ke, Jane X. Wang, Jovana Mitrovic, Martin Szummer, Danilo J. Rezende
The CRN represents causal models using continuous representations and hence can scale much better with the number of variables.
no code implementations • 7 Feb 2020 • Danilo J. Rezende, Ivo Danihelka, George Papamakarios, Nan Rosemary Ke, Ray Jiang, Theophane Weber, Karol Gregor, Hamza Merzic, Fabio Viola, Jane Wang, Jovana Mitrovic, Frederic Besse, Ioannis Antonoglou, Lars Buesing
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions.
no code implementations • ICLR 2019 • Jovana Mitrovic, Peter Wirnsberger, Charles Blundell, Dino Sejdinovic, Yee Whye Teh
Infinite-width neural networks have been extensively used to study the theoretical properties underlying the extraordinary empirical success of standard, finite-width neural networks.
1 code implementation • ICLR 2019 • Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents.
no code implementations • NeurIPS 2018 • Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh
Discovering the causal structure among a set of variables is a fundamental problem in many areas of science.
no code implementations • ICLR 2018 • Mohamed Ishmael Belghazi, Sai Rajeswar, Olivier Mastropietro, Negar Rostamzadeh, Jovana Mitrovic, Aaron Courville
We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model.
no code implementations • 15 Feb 2016 • Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh
Approximate Bayesian computation (ABC) is an inference framework that constructs an approximation to the true likelihood based on the similarity between the observed and simulated data as measured by a predefined set of summary statistics.
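The simplest instance of this framework is rejection ABC: draw parameters from the prior, simulate data, and keep only the draws whose summary statistic falls within a tolerance of the observed summary. A minimal sketch, with all names and the Gaussian-mean example assumed for illustration:

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, summary, eps, n_draws=10000, rng=None):
    """Rejection ABC: accept prior draws whose simulated summary
    statistic lies within eps of the observed summary."""
    rng = np.random.default_rng(0) if rng is None else rng
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)              # draw parameter from the prior
        data = simulate(theta, rng)            # simulate data under theta
        if abs(summary(data) - s_obs) < eps:   # compare summary statistics
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a unit-variance Gaussian.
rng = np.random.default_rng(1)
observed = rng.normal(2.0, 1.0, size=50)
posterior = rejection_abc(
    observed,
    simulate=lambda theta, r: r.normal(theta, 1.0, size=50),
    prior_sample=lambda r: r.uniform(-5.0, 5.0),
    summary=np.mean,
    eps=0.2,
)
```

The accepted draws approximate the posterior; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate, and the choice of summary statistics determines what information about the data the approximation retains.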