One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling

11 Dec 2013 · 3 code implementations

We propose a new benchmark corpus to be used for measuring progress in statistical language modeling.
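Language-modeling benchmarks like this one are typically scored by perplexity. As a minimal sketch of that standard metric (the definition below is the usual one, not text from the paper), perplexity is the exponential of the average negative log-probability per token:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token has perplexity 4:
# it is as uncertain as a uniform choice among 4 tokens.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # roughly 4.0
```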


Practical and Consistent Estimation of f-Divergences

NeurIPS 2019 · 1 code implementation

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
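For reference, here is the quantity being estimated in its simplest (discrete, fully known) form: D_f(P || Q) = sum_x q(x) f(p(x)/q(x)). This is a sketch of the population definition only, not the paper's sample-based estimator:

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for discrete distributions."""
    return sum(qx * f(px / qx) for px, qx in zip(p, q))

kl = lambda t: t * math.log(t)   # f(t) = t log t     -> KL divergence
tv = lambda t: 0.5 * abs(t - 1)  # f(t) = |t - 1| / 2 -> total variation

p = [0.5, 0.5]
q = [0.9, 0.1]
print(f_divergence(p, q, kl))  # KL(P || Q)
print(f_divergence(p, q, tv))  # total variation distance, here 0.4
```

Different choices of the convex function f recover KL, total variation, chi-squared, and other familiar divergences.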


Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

3 Jun 2014 · 22 code implementations

In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).


Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation

NeurIPS 2018 · 1 code implementation

While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.
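Rare-event simulation of this kind typically rests on importance sampling: sample from a proposal that makes the rare event common, then reweight by the likelihood ratio. A minimal generic sketch (a Gaussian tail probability, not the paper's AV simulator; the threshold and shift are illustrative choices):

```python
import math
import random

def rare_event_prob(threshold=4.0, shift=4.0, n=100_000, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from a
    shifted proposal N(shift, 1) and reweighting by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # ratio of N(0, 1) to N(shift, 1) densities at x
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

print(rare_event_prob())  # near the true tail probability, about 3.2e-5
```

Naive Monte Carlo with the same budget would see only a handful of threshold crossings; the shifted proposal sees them on roughly half of its draws.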


TensorFlow Distributions

28 Nov 2017 · 5 code implementations

The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation.


Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy

14 Nov 2016 · 1 code implementation

In this context, the maximum mean discrepancy (MMD) may be used in two roles: first, as a discriminator, either directly on the samples or on features of the samples.
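As a sketch of the discriminator role, the (biased) squared-MMD estimate compares average within-sample and cross-sample kernel values; the Gaussian kernel and toy samples below are illustrative choices, not the paper's optimized setup:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, k=gaussian_kernel):
    """Biased estimate of squared MMD between samples xs and ys."""
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / (m * m)
    kyy = sum(k(a, b) for a in ys for b in ys) / (n * n)
    kxy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy

same = [0.0, 0.1, -0.1]
far = [5.0, 5.1, 4.9]
print(mmd2(same, same))  # -> 0.0 for identical samples
print(mmd2(same, far))   # close to 2 for well-separated samples
```

A large MMD between generated and real samples indicates the generator is distinguishable from the data distribution.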

DPPy: Sampling DPPs with Python

19 Sep 2018 · 2 code implementations

Determinantal point processes (DPPs) are specific probability distributions over clouds of points that are used as models and computational tools across physics, probability, statistics, and more recently machine learning.
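The defining property of a DPP is that inclusion probabilities are principal minors of a marginal kernel K: P(A ⊆ S) = det(K_A). A from-scratch 2x2 sketch of the resulting repulsion, independent of DPPy's own API (the kernel values are illustrative):

```python
def det2(K, i, j):
    """Determinant of the 2x2 principal submatrix of K indexed by {i, j}."""
    return K[i][i] * K[j][j] - K[i][j] * K[j][i]

# A valid marginal kernel (symmetric, eigenvalues 0.9 and 0.1 lie in [0, 1]).
K = [[0.5, 0.4],
     [0.4, 0.5]]

p_joint = det2(K, 0, 1)       # P({0, 1} both sampled) = 0.25 - 0.16 = 0.09
p_indep = K[0][0] * K[1][1]   # 0.25 if the two items were independent
print(p_joint, p_indep)       # joint inclusion falls below independence
```

The off-diagonal similarity K[0][1] lowers the joint inclusion probability below the independent baseline, which is exactly the repulsive behavior that makes DPPs useful for diverse sampling.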


Deep Recurrent Survival Analysis

7 Sep 2018 · 1 code implementation

By capturing the time dependency through modeling the conditional probability of the event for each sample, our method predicts the likelihood of the true event occurrence and estimates the survival rate over time, i.e., the probability of the event not occurring, for the censored data.
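The relation between per-step conditional event probabilities (hazards) and the survival rate is the standard discrete-time identity S(t) = prod over i ≤ t of (1 - h_i). A minimal sketch of that composition (the hazard values are illustrative, and a model such as the paper's recurrent network would be what produces them):

```python
def survival_curve(hazards):
    """Discrete-time survival: S(t) = prod_{i<=t} (1 - h_i), where h_i is the
    conditional probability of the event at step i given survival so far."""
    s, curve = 1.0, []
    for h in hazards:
        s *= 1.0 - h
        curve.append(s)
    return curve

# Hazards 0.1, 0.2, 0.5 give survival probabilities 0.9, 0.72, 0.36.
print(survival_curve([0.1, 0.2, 0.5]))
```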


Synthesizing Tabular Data using Generative Adversarial Networks

27 Nov 2018 · 3 code implementations

Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution.

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

12 Mar 2015 · 2 code implementations

A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable.
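The paper's approach (the forward process behind what later became known as diffusion models) gradually destroys structure with a Markov chain of Gaussian steps, x_t ~ N(sqrt(1 - beta_t) x_{t-1}, beta_t), then learns to reverse it. A minimal sketch of the forward chain on a scalar, with an illustrative constant noise schedule:

```python
import math
import random

def forward_diffuse(x0, betas, seed=0):
    """Run the forward (noising) chain x_t ~ N(sqrt(1 - beta_t) x_{t-1}, beta_t)."""
    rng = random.Random(seed)
    x = x0
    for b in betas:
        x = math.sqrt(1.0 - b) * x + math.sqrt(b) * rng.gauss(0.0, 1.0)
    return x

def signal_scale(betas):
    """After T steps the original signal survives scaled by sqrt(prod(1 - beta_t))."""
    out = 1.0
    for b in betas:
        out *= math.sqrt(1.0 - b)
    return out

betas = [0.1] * 20
print(signal_scale(betas))       # about 0.35: most of the signal is gone
print(forward_diffuse(1.0, betas))  # one noised sample from the chain
```

Because each step keeps the chain Gaussian, sampling and evaluation stay tractable at every t, which is the tractability the abstract highlights.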