no code implementations • 21 Mar 2024 • Opher Bar Nathan, Deborah Levy, Tali Treibitz, Dan Rosenbaum
Using this prior together with a novel guidance method based on the underwater image formation model, we generate posterior samples of clean images, removing the water effects.
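A minimal sketch of one common underwater image formation model (attenuation plus backscatter) — an illustrative assumption, not necessarily the exact model the paper's guidance uses:

```python
import numpy as np

# A common underwater image formation model (an assumption; the paper's exact
# model may differ): the observed image I mixes the clean scene J, attenuated
# with distance, with veiling backscatter light B:
#   I = J * exp(-beta * z) + B * (1 - exp(-beta * z))

def degrade(J, z, beta, B):
    """Apply the forward formation model to a clean image J."""
    t = np.exp(-beta * z)          # per-channel transmission
    return J * t + B * (1.0 - t)   # attenuated signal + backscatter

def restore(I, z, beta, B):
    """Invert the model analytically (only valid when z, beta, B are known)."""
    t = np.exp(-beta * z)
    return (I - B * (1.0 - t)) / t

J = np.array([0.8, 0.5, 0.2])                  # clean RGB pixel
beta = np.array([0.1, 0.3, 0.6])               # red attenuates least here
I = degrade(J, z=2.0, beta=beta, B=0.7)
J_hat = restore(I, z=2.0, beta=beta, B=0.7)
```

In practice the parameters are unknown, which is why the paper resorts to posterior sampling under a prior rather than this direct inversion.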
1 code implementation • CVPR 2023 • Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak, Simon Korman, Tali Treibitz
Even more excitingly, we can render clear views of these scenes, removing the medium between the camera and the scene and reconstructing the appearance and depth of far objects, which are severely occluded by the medium.
no code implementations • 6 Feb 2023 • Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim
Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities.
1 code implementation • 28 Jan 2022 • Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Rezende, Dan Rosenbaum
A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location.
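The idea can be illustrated in a few lines — here a fixed Fourier-feature regression stands in for a trained neural network (an assumption for brevity): a continuous function is fitted to discrete samples and can then be queried at any coordinate.

```python
import numpy as np

# Sketch of a continuous signal representation: instead of storing discrete
# samples, fit a function of the coordinate itself.  A Fourier-feature linear
# regression stands in for a trained MLP here (an illustrative assumption).

def fit_field(xs, ys, n_harmonics=8):
    """Fit a continuous representation to samples (xs, ys) on [0, 1]."""
    def features(x):
        k = np.arange(1, n_harmonics + 1)
        arg = 2 * np.pi * np.outer(x, k)
        return np.concatenate(
            [np.ones((len(x), 1)), np.cos(arg), np.sin(arg)], axis=1
        )
    w, *_ = np.linalg.lstsq(features(xs), ys, rcond=None)
    return lambda x: features(np.atleast_1d(x)) @ w

xs = np.linspace(0.0, 1.0, 50)     # discrete measurement grid
ys = np.sin(2 * np.pi * xs)        # underlying continuous signal

field = fit_field(xs, ys)
value = field(0.123)[0]            # query at a coordinate never sampled
```

The representation is resolution-free: the same fitted function answers queries at any spatial location, which is the property that makes neural fields attractive.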
1 code implementation • 2 Oct 2019 • John F. J. Mellor, Eunbyung Park, Yaroslav Ganin, Igor Babuschkin, Tejas Kulkarni, Dan Rosenbaum, Andy Ballard, Theophane Weber, Oriol Vinyals, S. M. Ali Eslami
We investigate using reinforcement learning agents as generative models of images (extending arXiv:1804.01118).
7 code implementations • ICLR 2019 • Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh
Neural Processes (NPs) (Garnelo et al., 2018a;b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions.
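That mapping can be sketched as an encode–aggregate–decode pipeline. The weights below are random and untrained, and the dimensions and nonlinearity are illustrative assumptions, not the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained sketch of the NP computation graph; all sizes and weights are
# illustrative assumptions, not the published model.
D_X, D_Y, D_R = 1, 1, 8
W_enc = rng.normal(size=(D_X + D_Y, D_R))
W_dec = rng.normal(size=(D_R + D_X, 2 * D_Y))

def predict(ctx_x, ctx_y, tgt_x):
    """Map a context set of (x, y) pairs to a predictive mean/std at tgt_x."""
    pairs = np.concatenate([ctx_x, ctx_y], axis=1)         # encode each pair
    r = np.tanh(pairs @ W_enc).mean(axis=0)                # order-invariant aggregate
    dec_in = np.concatenate([np.tile(r, (len(tgt_x), 1)), tgt_x], axis=1)
    out = dec_in @ W_dec                                   # decode per target
    return out[:, :D_Y], np.exp(out[:, D_Y:])              # mean, std > 0

ctx_x = np.array([[0.1], [0.5], [0.9]])
ctx_y = np.sin(ctx_x)
tgt_x = np.array([[0.3], [0.7]])
mean, std = predict(ctx_x, ctx_y, tgt_x)
```

Because the context is pooled with a mean, predictions are invariant to the order of the context pairs — a defining property of the NP family that the Attentive NP preserves while replacing the uniform pooling with attention.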
no code implementations • 4 Jul 2018 • Dan Rosenbaum, Frederic Besse, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami
We consider learning based methods for visual localization that do not require the construction of explicit maps in the form of point clouds or voxels.
17 code implementations • ICML 2018 • Marta Garnelo, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, S. M. Ali Eslami
Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function.
13 code implementations • 4 Jul 2018 • Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, Yee Whye Teh
A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision.
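That tuning step can be shown concretely — a plain least-squares fit of a linear model by gradient descent, a toy stand-in for a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a neural network: a linear model y = w*x + b, tuned by
# gradient descent on squared error over a labelled collection of data.
xs = rng.uniform(-1, 1, size=100)
ys = 3.0 * xs - 0.5                          # labels from a known function

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * xs + b
    grad_w = 2 * ((pred - ys) * xs).mean()   # d(MSE)/dw
    grad_b = 2 * (pred - ys).mean()          # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b
```

The point of the paper is that such a fit starts from scratch for every new function — the CNP instead amortizes this process by conditioning on observations directly.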
no code implementations • 11 Apr 2016 • Dan Rosenbaum, Yair Weiss
Consistent with current practice, we find that robust versions of gradient constancy are better models than simple brightness constancy, but a learned GMM that models the density of patches of warp error gives a much better fit than any existing assumption of constancy.
no code implementations • 11 Apr 2016 • Dan Rosenbaum, Yair Weiss
We then use the generative models together with a degradation model and obtain a Bayes Least Squares (BLS) estimator of the D channel given the RGB channels.
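In the simplest Gaussian case the BLS estimator is the posterior mean, which has a closed form. The scalar toy below is an illustrative assumption — the paper uses learned image priors, not a scalar Gaussian:

```python
import numpy as np

# Scalar Gaussian toy of a Bayes Least Squares (posterior-mean) estimator —
# an illustrative assumption, not the paper's learned prior for depth.
# Prior: x ~ N(0, s2_x).  Degradation model: y = x + n, n ~ N(0, s2_n).
# BLS estimate: E[x | y] = s2_x / (s2_x + s2_n) * y  (shrinkage toward the mean).

def bls(y, s2_x, s2_n):
    return s2_x / (s2_x + s2_n) * y

rng = np.random.default_rng(0)
s2_x, s2_n = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(s2_x), size=50_000)
y = x + rng.normal(0.0, np.sqrt(s2_n), size=50_000)

mse_bls = np.mean((bls(y, s2_x, s2_n) - x) ** 2)
mse_raw = np.mean((y - x) ** 2)              # using the observation directly
```

The BLS estimate provably minimizes mean squared error among all estimators, which is why combining a good generative prior with a degradation model is attractive for inverse problems like depth estimation.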
no code implementations • NeurIPS 2015 • Dan Rosenbaum, Yair Weiss
In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images.
no code implementations • 19 Feb 2014 • Alon Gonen, Dan Rosenbaum, Yonina Eldar, Shai Shalev-Shwartz
The goal of subspace learning is to find a $k$-dimensional subspace of $\mathbb{R}^d$, such that the expected squared distance between instance vectors and the subspace is as small as possible.
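For a finite sample this objective is solved in closed form by the top-$k$ right singular vectors of the data matrix (classical PCA); the paper's contribution concerns the statistical/streaming setting, not this batch solution. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data lying near a 2-dimensional subspace of R^5 (illustrative setup).
basis = rng.normal(size=(2, 5))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 5))

# The top-k right singular vectors span the k-dimensional subspace that
# minimizes the average squared distance to the instance vectors.
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T @ Vt[:k]                        # projector onto the subspace
avg_sq_dist = ((X - X @ P) ** 2).sum(axis=1).mean()
```

With near-planar data the residual is on the order of the noise variance in the discarded directions, confirming the learned subspace captures the signal.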
no code implementations • NeurIPS 2013 • Dan Rosenbaum, Daniel Zoran, Yair Weiss
Motivated by recent progress in natural image statistics, we use newly available datasets with ground truth optical flow to learn the local statistics of optical flow and rigorously compare the learned model to prior models assumed by computer vision optical flow algorithms.