We propose a new benchmark corpus to be used for measuring progress in statistical language modeling.
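
Progress on such benchmarks is typically reported as perplexity, the exponential of the average negative log-probability per token. A minimal sketch of that metric (the function name and toy numbers are illustrative, not from the paper):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the negative mean log-probability per token."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# Toy example: a model that assigns probability 0.25 to each of 4 tokens.
logps = [math.log(0.25)] * 4
print(perplexity(logps))  # uniform 0.25 per token gives perplexity 4.0
```

Lower perplexity means the model concentrates more probability mass on the observed tokens.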


The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
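
As a concrete instance of the problem, when both densities can be evaluated, the KL divergence (an f-divergence) admits a simple Monte Carlo estimate from samples of p: KL(p||q) ≈ mean of log p(x) − log q(x). A hedged sketch with two known Gaussians (helper names are illustrative):

```python
import math, random

def normal_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def kl_estimate(samples_p, logp, logq):
    # KL(p||q) = E_p[log p(x) - log q(x)], estimated by a sample mean
    return sum(logp(x) - logq(x) for x in samples_p) / len(samples_p)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
est = kl_estimate(xs,
                  lambda x: normal_logpdf(x, 0.0, 1.0),
                  lambda x: normal_logpdf(x, 1.0, 1.0))
# Closed form for N(0,1) vs N(1,1): KL = (mu1 - mu2)^2 / 2 = 0.5
print(est)
```

The harder setting the excerpt refers to is when only samples, not densities, are available, which this plug-in estimator does not cover.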

In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).
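
The core idea is that one RNN folds a variable-length input sequence into a fixed-length state, and a second RNN unrolls from that state. A minimal NumPy sketch of the forward pass (weights and dimensions are hypothetical, and the decoder omits input feeding for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
# Hypothetical weights for a single-layer tanh RNN (illustrative only)
Wx = rng.normal(scale=0.1, size=(d_h, d_in))
Wh = rng.normal(scale=0.1, size=(d_h, d_h))

def encode(xs):
    """Fold a sequence of input vectors into one fixed-length state."""
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def decode(h, steps):
    """Unroll the decoder RNN from the encoder's summary state."""
    outputs = []
    for _ in range(steps):
        h = np.tanh(Wh @ h)
        outputs.append(h.copy())
    return outputs

summary = encode([rng.normal(size=d_in) for _ in range(4)])
outs = decode(summary, steps=2)
print(summary.shape, len(outs))  # (5,) 2
```

In the actual model both networks are trained jointly; this sketch only shows the shape of the computation.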


While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.

The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation.
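
The central abstraction is a distribution object that bundles sampling and log-density evaluation behind one interface. A minimal stand-in for that pattern in plain NumPy (this is illustrative only and is not the TensorFlow Distributions API; the real library additionally makes these operations differentiable):

```python
import numpy as np

class Normal:
    """Minimal sketch of a distribution object: sampling and
    log-density behind one interface."""
    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale
    def sample(self, n, rng):
        return rng.normal(self.loc, self.scale, size=n)
    def log_prob(self, x):
        z = (x - self.loc) / self.scale
        return -0.5 * z**2 - np.log(self.scale) - 0.5 * np.log(2 * np.pi)

d = Normal(loc=0.0, scale=1.0)
xs = d.sample(3, np.random.default_rng(0))
print(d.log_prob(0.0))  # log density at the mean: -0.5*log(2*pi)
```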

In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples.
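
As a discriminator, the (squared) MMD compares two sample sets through a kernel; a small value suggests the samples come from the same distribution. A hedged NumPy sketch of the standard biased estimator with an RBF kernel (function names and the `gamma` value are illustrative):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets x and y."""
    return (rbf(x, x, gamma).mean()
            + rbf(y, y, gamma).mean()
            - 2 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
print(same, diff)  # same-distribution value is near 0; shifted one is larger
```

Replacing the raw samples with learned features of the samples, as the excerpt notes, only changes what is fed into `mmd2`.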

Determinantal point processes (DPPs) are specific probability distributions over clouds of points that are used as models and computational tools across physics, probability, statistics, and more recently machine learning.
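
The defining property is that inclusion probabilities are determinants: for a DPP with marginal kernel K, the probability that a subset S is contained in the sampled set is det(K_S), the determinant of K restricted to S. A small sketch (the kernel matrix is a made-up valid example, symmetric with eigenvalues in [0, 1]):

```python
import numpy as np

# A valid marginal kernel K on 3 items (symmetric, eigenvalues in [0, 1]).
K = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 0.5]])

def inclusion_prob(K, S):
    """P(S is contained in the random set) = det of K restricted to S."""
    sub = K[np.ix_(S, S)]
    return np.linalg.det(sub)

print(inclusion_prob(K, [0]))     # 0.5: marginal probability of item 0
print(inclusion_prob(K, [0, 1]))  # 0.25 - 0.04 = 0.21 < 0.5*0.5: repulsion
```

The joint probability 0.21 is below the product of the marginals (0.25), which is the "repulsive" behavior that makes DPPs useful for modeling diverse subsets.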

By capturing time dependency through the conditional probability of the event at each step, our method predicts the likelihood of the true event occurring and, for the censored data, estimates the survival rate over time, i.e., the probability that the event has not occurred.
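
In discrete time, this relationship is simple: if h_s is the conditional probability of the event at step s given survival so far, the survival probability is the running product S(t) = Π_{s≤t} (1 − h_s). A minimal sketch (the hazard values are hypothetical model outputs, not from the paper):

```python
def survival_curve(hazards):
    """Survival probability over time from per-step conditional event
    probabilities (hazards): S(t) = prod over s <= t of (1 - h_s)."""
    s, curve = 1.0, []
    for h in hazards:
        s *= (1.0 - h)
        curve.append(s)
    return curve

# Hypothetical per-step event probabilities produced by a model
print(survival_curve([0.1, 0.2, 0.5]))  # approximately [0.9, 0.72, 0.36]
```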

Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution.
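
The implicit learning happens through a two-player loss: a discriminator is rewarded for scoring real data high and generated data low. A sketch of one forward evaluation of the standard discriminator loss, with a deliberately trivial one-parameter discriminator and generator (all functions and constants here are hypothetical, chosen only to make the loss computable):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical discriminator score and generator (illustrative only)
def discriminator(x, w=2.0):
    return sigmoid(w * x)          # probability the input is "real"

def generator(z, shift=0.5):
    return z + shift               # maps noise toward the data region

random.seed(0)
real = [random.gauss(1.0, 0.1) for _ in range(4)]
fake = [generator(random.gauss(0.0, 0.1)) for _ in range(4)]

# Standard GAN discriminator loss: -log D(real) - log(1 - D(fake))
d_loss = (-sum(math.log(discriminator(x)) for x in real)
          - sum(math.log(1.0 - discriminator(x)) for x in fake))
print(d_loss)
```

Training alternates gradient steps on this loss and on the generator's opposing objective; that loop is omitted here.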

A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable.
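
A classic example of a family that is flexible yet keeps sampling and evaluation tractable is a Gaussian mixture: its density is an explicit sum, and sampling is ancestral. A small sketch (the component weights, means, and scales are made up for illustration):

```python
import math, random

# A two-component 1-D Gaussian mixture: flexible shape, yet sampling
# and exact density evaluation both stay tractable.
weights = [0.3, 0.7]
means   = [-2.0, 1.0]
stds    = [0.5, 1.0]

def sample(rng):
    k = 0 if rng.random() < weights[0] else 1
    return rng.gauss(means[k], stds[k])

def pdf(x):
    total = 0.0
    for w, m, s in zip(weights, means, stds):
        total += w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return total

rng = random.Random(0)
xs = [sample(rng) for _ in range(5)]
print(pdf(1.0))  # exact density at the second component's mean
```

The families the excerpt has in mind are far more expressive than this, but the trade-off it names is the same: expressiveness versus tractable learning, sampling, inference, and evaluation.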