Search Results for author: Jiri Hron

Found 16 papers, 6 papers with code

Optimising Human-Machine Collaboration for Efficient High-Precision Information Extraction from Text Documents

no code implementations18 Feb 2023 Bradley Butcher, Miri Zilka, Darren Cook, Jiri Hron, Adrian Weller

We argue for the utility of a human-in-the-loop approach in applications where high precision is required, but purely manual extraction is infeasible.

Modeling Content Creator Incentives on Algorithm-Curated Platforms

no code implementations27 Jun 2022 Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus, Sarah Dean

To this end, we propose tools for numerically finding equilibria in exposure games, and illustrate results of an audit on the MovieLens and LastFM datasets.

Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling

no code implementations15 Jun 2022 Jiri Hron, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein

We introduce repriorisation, a data-dependent reparameterisation which transforms a Bayesian neural network (BNN) posterior to a distribution whose KL divergence to the BNN prior vanishes as layer widths grow.

On component interactions in two-stage recommender systems

no code implementations28 Jun 2021 Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus

Thanks to their scalability, two-stage recommenders are used by many of today's largest online platforms, including YouTube, LinkedIn, and Pinterest.

Recommendation Systems
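The two-stage architecture mentioned above can be illustrated with a minimal sketch: a cheap "nominator" stage retrieves a short candidate list from the full catalogue, and a second stage re-ranks only those candidates. All names and the dot-product scorer below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: items represented as embedding vectors (hypothetical setup).
n_items, dim = 10_000, 16
item_embs = rng.normal(size=(n_items, dim))

def nominate(user_emb, k=100):
    """Stage 1: cheap candidate generation via dot-product retrieval."""
    scores = item_embs @ user_emb
    return np.argpartition(-scores, k)[:k]          # top-k, unordered

def rank(user_emb, candidates, k=10):
    """Stage 2: a more expensive ranker, here a trivial stand-in that
    re-scores only the short candidate list and sorts it."""
    scores = item_embs[candidates] @ user_emb
    order = np.argsort(-scores)
    return candidates[order[:k]]

user = rng.normal(size=dim)
top10 = rank(user, nominate(user))
```

The point of the split is that stage 1 touches all `n_items` with a cheap score, while the costly model only ever sees the 100 nominated candidates.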

Infinite attention: NNGP and NTK for deep attention networks

1 code implementation ICML 2020 Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, Roman Novak

There is a growing amount of literature on the relationship between wide neural networks (NNs) and Gaussian processes (GPs), identifying an equivalence between the two for a variety of NN architectures.

Deep Attention Gaussian Processes

Exact posterior distributions of wide Bayesian neural networks

1 code implementation18 Jun 2020 Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein

Recent work has shown that the prior over functions induced by a deep Bayesian neural network (BNN) behaves as a Gaussian process (GP) as the width of all layers becomes large.
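The wide-network-to-GP behaviour referenced above is easy to check empirically. The Monte Carlo sketch below (an illustration, not the paper's method) samples many random one-hidden-layer ReLU networks at a fixed input and checks that the output statistics match the NNGP prediction: with unit-variance pre-activations, the output variance is E[relu(z)^2] = 1/2 for z ~ N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

d, width, n_nets = 8, 512, 10_000
x = rng.normal(size=d)
x /= np.linalg.norm(x) / np.sqrt(d)   # normalise so ||x||^2 = d

outs = np.empty(n_nets)
for i in range(n_nets):
    # Standard parameterisation: pre-activations have unit variance,
    # readout is scaled by 1/sqrt(width).
    W1 = rng.normal(size=(width, d)) / np.sqrt(d)
    w2 = rng.normal(size=width)
    outs[i] = w2 @ np.maximum(W1 @ x, 0.0) / np.sqrt(width)

# mean ~ 0 and variance ~ 0.5, as the NNGP kernel predicts
```

Checking a full multivariate input set against the closed-form arc-cosine kernel works the same way, just with more bookkeeping.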

Variational Bayesian dropout: pitfalls and fixes

no code implementations ICML 2018 Jiri Hron, Alexander G. de G. Matthews, Zoubin Ghahramani

Dropout, a stochastic regularisation technique for training neural networks, has recently been reinterpreted as a specific type of approximate inference algorithm for Bayesian neural networks.

Gaussian Process Behaviour in Wide Deep Neural Networks

2 code implementations ICLR 2018 Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, Zoubin Ghahramani

Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties.

Gaussian Processes

Variational Gaussian Dropout is not Bayesian

no code implementations8 Nov 2017 Jiri Hron, Alexander G. de G. Matthews, Zoubin Ghahramani

Gaussian multiplicative noise is commonly used as a stochastic regularisation technique in training of deterministic neural networks.

Bayesian Inference
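The Gaussian multiplicative noise discussed above is typically applied as "Gaussian dropout": activations are multiplied element-wise by noise drawn from N(1, alpha). A minimal sketch (the function name and default alpha are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_dropout(h, alpha=0.25, train=True):
    """Gaussian multiplicative noise: h * xi with xi ~ N(1, alpha).

    At alpha = p / (1 - p) this matches the first two moments of
    rescaled Bernoulli dropout with drop probability p.
    """
    if not train:
        return h  # noise has mean 1, so no test-time rescaling is needed
    xi = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=h.shape)
    return h * xi

h = np.ones((4, 3))
noisy = gaussian_dropout(h)
```

Because the noise is mean-one, the expected forward pass is unchanged, which is what makes the Bernoulli-vs-Gaussian comparison in the paper meaningful.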

Concrete Dropout

5 code implementations NeurIPS 2017 Yarin Gal, Jiri Hron, Alex Kendall

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks.

Reinforcement Learning (RL)
