ICML 2018

Efficient Neural Audio Synthesis

ICML 2018 CorentinJ/Real-Time-Voice-Cloning

The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time.

SPEECH SYNTHESIS · TEXT-TO-SPEECH SYNTHESIS
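Sparsity translates directly into fewer multiply-adds per generated sample. A minimal numpy sketch of the effect (the 896-unit state and ~96% sparsity mirror figures reported in the paper; everything else here is illustrative, not the paper's code):

```python
import numpy as np

hidden = 896          # recurrent state size used by WaveRNN in the paper
sparsity = 0.96       # fraction of weights pruned (high-sparsity setting)

rng = np.random.default_rng(0)
W = rng.standard_normal((hidden, hidden))
mask = rng.random(W.shape) >= sparsity   # keep only ~4% of the weights
W_sparse = W * mask

h = rng.standard_normal(hidden)
y = W_sparse @ h  # equivalent to iterating over the surviving weights only

# Work per sampling step: a dense matvec costs hidden*hidden multiply-adds,
# a sparse one only as many as there are surviving weights.
dense_ops = W.size
sparse_ops = int(mask.sum())
print(f"multiply-adds per step: dense={dense_ops}, sparse={sparse_ops} "
      f"({sparse_ops / dense_ops:.1%} of dense)")
```

On hardware with an efficient sparse kernel, this reduction in per-step work is what makes real-time sampling on a mobile CPU plausible.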

RLlib: Abstractions for Distributed Reinforcement Learning

ICML 2018 ray-project/ray

Reinforcement learning (RL) algorithms involve the deep nesting of highly irregular computation patterns, each of which typically exhibits opportunities for distributed computation.

Implicit Quantile Networks for Distributional Reinforcement Learning

ICML 2018 google/dopamine

In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN.

ATARI GAMES · DISTRIBUTIONAL REINFORCEMENT LEARNING
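IQN trains a network on randomly sampled quantile fractions using a quantile Huber loss. A minimal numpy sketch of that loss (function name and the κ default are assumptions for illustration, not the paper's code):

```python
import numpy as np

def quantile_huber_loss(td_errors, taus, kappa=1.0):
    """Quantile Huber loss as used in distributional RL (QR-DQN/IQN).

    td_errors: TD errors u = target - prediction, shape (N,)
    taus: quantile fractions in (0, 1), one per prediction, shape (N,)
    """
    u = np.asarray(td_errors, dtype=float)
    taus = np.asarray(taus, dtype=float)
    # Huber loss smooths the quantile loss near zero for stable gradients.
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric weighting: a tau-quantile prediction penalizes
    # underestimation by tau and overestimation by (1 - tau).
    weight = np.abs(taus - (u < 0).astype(float))
    return (weight * huber / kappa).mean()
```

IQN's key move over QR-DQN is that the taus are sampled uniformly at each update rather than fixed, so the network learns the full quantile function implicitly.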

Addressing Function Approximation Error in Actor-Critic Methods

ICML 2018 facebookresearch/Horizon

In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies.

Q-LEARNING
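The paper's remedy (TD3's clipped double Q-learning) bootstraps from the minimum of two target critics, trading a small underestimation bias for the removal of the harmful overestimation bias. A minimal numpy sketch of the target computation (names and defaults are illustrative):

```python
import numpy as np

def clipped_double_q_target(rewards, q1_next, q2_next, dones, gamma=0.99):
    """TD3-style bootstrap target: take the element-wise minimum of two
    target critics so a single critic's overestimation cannot propagate."""
    q_next = np.minimum(q1_next, q2_next)
    return rewards + gamma * (1.0 - dones) * q_next

# One overestimating critic (5.0) is clipped by the other (3.0):
target = clipped_double_q_target(np.array([1.0]),
                                 np.array([5.0]), np.array([3.0]),
                                 np.array([0.0]))
print(target)
```

The full algorithm pairs this with delayed policy updates and target-policy smoothing, but the min-over-critics target is the piece that directly addresses the overestimation described above.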

Hierarchical Text Generation and Planning for Strategic Dialogue

ICML 2018 facebookresearch/end-to-end-negotiator

End-to-end models for goal-orientated dialogue are challenging to train, because linguistic and strategic aspects are entangled in latent state vectors.

DECISION MAKING · TEXT GENERATION

Which Training Methods for GANs do actually Converge?

ICML 2018 facebookresearch/pytorch_GAN_zoo

In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent.

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

ICML 2018 IBM/AIF360

We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
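Auditing is easy when the subgroups are explicitly enumerated; the paper's hardness result concerns rich, implicitly defined subgroup classes (e.g. all conjunctions of attributes). A brute-force audit over a given subgroup list can be sketched in a few lines of numpy (all names here are hypothetical):

```python
import numpy as np

def subgroup_fpr_gaps(y_true, y_pred, subgroup_masks):
    """For each named subgroup, report how far its false positive rate
    (rate of positive predictions among true negatives) deviates from
    the population FPR. Brute force over an explicit subgroup list."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    neg = y_true == 0
    overall_fpr = y_pred[neg].mean()
    gaps = {}
    for name, mask in subgroup_masks.items():
        sel = neg & np.asarray(mask)
        if sel.any():
            gaps[name] = abs(y_pred[sel].mean() - overall_fpr)
    return overall_fpr, gaps
```

The "gerrymandering" problem is precisely that a classifier can pass this check on every listed group while violating fairness on a subgroup defined by a combination of attributes that nobody enumerated.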

Noise2Noise: Learning Image Restoration without Clean Data

ICML 2018 NVlabs/noise2noise

We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by looking only at corrupted examples, with performance matching and sometimes exceeding training on clean data, without explicit image priors or likelihood models of the corruption.

DENOISING · IMAGE RESTORATION
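The statistical argument is easy to verify numerically: under zero-mean noise, the L2 loss against noisy targets has the same minimizer as against clean targets, since the noise only adds a predictor-independent constant to the expected loss. A toy numpy demonstration (the signal and noise model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))           # hypothetical clean signal
noisy_targets = clean + rng.normal(0, 0.5, (1000, 100))  # independently corrupted copies

# The L2-optimal per-pixel constant predictor is the mean of its targets;
# fit to noisy targets only, it still converges to the clean signal.
estimate = noisy_targets.mean(axis=0)
print(np.abs(estimate - clean).max())  # small, despite never seeing clean data
```

Noise2Noise applies the same principle inside network training: each noisy input is regressed against a second, independently corrupted observation of the same scene rather than a clean ground truth.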

IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures

ICML 2018 deepmind/scalable_agent

In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters.

ATARI GAMES