ICLR 2018

Scalable Private Learning with PATE

ICLR 2018 tensorflow/models

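PATE trains an ensemble of teacher models on disjoint data partitions and releases labels only through a noisy aggregation of their votes, which is what gives the student's training signal its differential-privacy guarantee. A minimal numpy sketch of a GNMax-style noisy vote aggregation (the noise scale and teacher count are illustrative assumptions, not the paper's calibrated values):

```
import numpy as np

def noisy_max_aggregate(teacher_votes, num_classes, sigma=40.0, rng=None):
    """Differentially private label via noisy aggregation of teacher votes.

    teacher_votes: 1-D array of per-teacher predicted class indices.
    sigma: Gaussian noise scale (illustrative value, not tuned).
    """
    if rng is None:
        rng = np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.normal(0.0, sigma, size=num_classes)  # GNMax-style noise
    return int(np.argmax(counts))

# Example: 250 teachers voting over 10 classes.
rng = np.random.default_rng(0)
votes = rng.integers(0, 10, size=250)
label = noisy_max_aggregate(votes, num_classes=10, rng=rng)
```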

Parameter Space Noise for Exploration

ICLR 2018 tensorflow/models

Combining parameter noise with traditional RL methods yields the best of both worlds.

CONTINUOUS CONTROL
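Parameter space noise perturbs the policy's weights directly, rather than its actions, and keeps the perturbation fixed for a whole episode, giving temporally consistent exploration. A minimal sketch with a linear policy and a stub environment (the policy form, reward, and noise scale are illustrative assumptions):

```
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim = 4, 2
theta = rng.normal(size=(obs_dim, act_dim))          # policy parameters

def rollout(policy_params, steps=100):
    """Stub environment loop; replace with a real environment."""
    total = 0.0
    obs = rng.normal(size=obs_dim)
    for _ in range(steps):
        action = obs @ policy_params                 # deterministic policy
        total += -np.sum(action ** 2)                # placeholder reward
        obs = rng.normal(size=obs_dim)
    return total

# Parameter-space exploration: sample one perturbation per episode
# and keep it fixed for the entire rollout.
sigma = 0.1                                          # assumed noise scale
perturbed = theta + rng.normal(0.0, sigma, size=theta.shape)
episode_return = rollout(perturbed)
```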

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling

ICLR 2018 tensorflow/models

At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical.

DECISION MAKING · MULTI-ARMED BANDITS
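The algorithms compared are variants of Thompson sampling: maintain a posterior over each arm's reward, draw one sample per arm, and play the arm with the highest draw. A minimal sketch with independent Gaussian arms as a conjugate stand-in for the neural posterior approximations benchmarked in the paper:

```
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])               # hidden arm rewards
k = len(true_means)

# Gaussian posterior per arm: running mean and observation count.
counts = np.zeros(k)
means = np.zeros(k)

for t in range(1000):
    # Sample a plausible mean for each arm from its posterior,
    # then act greedily on the samples (Thompson sampling).
    std = 1.0 / np.sqrt(counts + 1.0)
    sampled = rng.normal(means, std)
    arm = int(np.argmax(sampled))

    reward = rng.normal(true_means[arm], 0.5)
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
```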

Ensemble Adversarial Training: Attacks and Defenses

ICLR 2018 tensorflow/models

We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss.
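Ensemble adversarial training decouples attack generation from the model being trained: each batch is augmented with examples crafted against other, static pre-trained models. A numpy sketch of the FGSM perturbation step against a fixed logistic-regression stand-in for such a static model (the model form and epsilon are illustrative assumptions):

```
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """Fast gradient sign perturbation of inputs x against a fixed
    logistic-regression model (w, b); labels y in {0, 1}."""
    logits = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logits))
    grad_x = np.outer(p - y, w)        # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))
y = rng.integers(0, 2, size=32)
w_static, b_static = rng.normal(size=10), 0.0   # held-out "static" model

# Augment each batch with examples adversarial for the static model,
# so perturbations do not depend on the model currently being trained.
x_adv = fgsm(x, y, w_static, b_static)
batch = np.concatenate([x, x_adv])
labels = np.concatenate([y, y])
```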

Depthwise Separable Convolutions for Neural Machine Translation

ICLR 2018 tensorflow/tensor2tensor

In this work, we study how depthwise separable convolutions can be applied to neural machine translation.

MACHINE TRANSLATION
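A depthwise separable convolution factors a full convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mix, cutting parameters and compute. A minimal Keras sketch of a 1-D convolutional sequence encoder (layer sizes are illustrative, not the paper's SliceNet configuration):

```
import tensorflow as tf

seq_len, d_model = 64, 256
inputs = tf.keras.Input(shape=(seq_len, d_model))

# Depthwise step: one filter per input channel, convolved along time.
# Pointwise step: 1x1 convolution mixing channels. SeparableConv1D
# fuses both, replacing a dense Conv1D at a fraction of the cost.
x = tf.keras.layers.SeparableConv1D(
    filters=d_model, kernel_size=3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.SeparableConv1D(
    filters=d_model, kernel_size=3, padding="same", activation="relu")(x)

encoder = tf.keras.Model(inputs, x)
```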

Generating Wikipedia by Summarizing Long Sequences

ICLR 2018 tensorflow/tensor2tensor

We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents.

DOCUMENT SUMMARIZATION · MULTI-DOCUMENT SUMMARIZATION
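The proposed pipeline is two-stage: an extractive step selects salient passages from the many source documents, and an abstractive neural model generates the article from the selection. A sketch of one simple extractive scoring choice, tf-idf similarity of paragraphs to the article title (this particular scorer is an illustrative assumption, not necessarily the paper's exact extractor):

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

title = "Python (programming language)"
paragraphs = [
    "Python is a high-level programming language.",
    "The weather in Amsterdam is often rainy.",
    "Python emphasizes code readability.",
]

# Score each source paragraph by tf-idf similarity to the title,
# then keep the top passages as input to the abstractive model.
vec = TfidfVectorizer()
matrix = vec.fit_transform([title] + paragraphs)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
ranked = sorted(zip(scores, paragraphs), reverse=True)
top_k = [p for _, p in ranked[:2]]
```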

Discrete Autoencoders for Sequence Models

ICLR 2018 tensorflow/tensor2tensor

We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space.

LANGUAGE MODELLING · MACHINE TRANSLATION
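Forcing the bottleneck through discrete symbols breaks differentiability; a standard workaround is the Gumbel-softmax relaxation, which the paper evaluates alongside an improved semantic-hashing discretization. A numpy sketch of the relaxed sampling step (dimensions and temperature are illustrative assumptions):

```
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Relaxed sample from a categorical over discrete latent codes."""
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
encoder_logits = rng.normal(size=(8, 16))     # 8 latent slots, 16 symbols
soft_codes = gumbel_softmax(encoder_logits, rng=rng)   # used during training
hard_codes = soft_codes.argmax(axis=-1)       # discrete symbols at inference
```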

A Neural Representation of Sketch Drawings

ICLR 2018 googlecreativelab/quickdraw-dataset

We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects.
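sketch-rnn models drawings as sequences of pen offsets rather than pixels: each step is (dx, dy) plus a one-hot pen state (down, up, end of sketch), the so-called stroke-5 format. A numpy sketch of converting absolute points to this format (the helper name and example strokes are illustrative):

```
import numpy as np

def to_stroke5(points, pen_up_after):
    """points: (N, 2) absolute coordinates; pen_up_after: indices where
    the pen lifts. Returns (N, 5) rows of [dx, dy, p_down, p_up, p_end]."""
    deltas = np.diff(points, axis=0, prepend=points[:1])
    strokes = np.zeros((len(points), 5))
    strokes[:, :2] = deltas
    strokes[:, 2] = 1.0                      # default: pen touching paper
    for i in pen_up_after:
        strokes[i, 2], strokes[i, 3] = 0.0, 1.0
    strokes[-1, 2:] = [0.0, 0.0, 1.0]        # end-of-sketch marker
    return strokes

points = np.array([[0, 0], [10, 5], [12, 9], [3, 3]], dtype=float)
stroke5 = to_stroke5(points, pen_up_after=[1])
```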

Word Translation Without Parallel Data

ICLR 2018 facebookresearch/MUSE

Finally, we describe experiments on the English-Esperanto low-resource language pair, for which only a limited amount of parallel data exists, to show the potential impact of our method on fully unsupervised machine translation.

UNSUPERVISED MACHINE TRANSLATION · WORD ALIGNMENT · WORD EMBEDDINGS
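After adversarial training roughly aligns the two monolingual embedding spaces, the mapping is refined by solving the orthogonal Procrustes problem over a synthetic seed dictionary: the rotation W minimizing ||XW - Y|| is U V^T, where U S V^T is the SVD of X^T Y. A numpy sketch (the dictionary vectors here are random placeholders):

```
import numpy as np

rng = np.random.default_rng(0)
d, n = 300, 5000                       # embedding dim, dictionary size
X = rng.normal(size=(n, d))            # source-language word vectors
Y = rng.normal(size=(n, d))            # their translations' vectors

# Orthogonal Procrustes: the rotation W minimizing ||XW - Y|| is U V^T,
# where U S V^T is the SVD of X^T Y.
u, _, vt = np.linalg.svd(X.T @ Y)
W = u @ vt

mapped = X @ W                         # source vectors in target space
```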