Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models

In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.
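As a rough illustration of the reward blending described in the abstract, the sketch below mixes a GAN discriminator score with a task objective into a single reward signal. This is a minimal sketch, assuming a simple linear mix controlled by a weight lambda; the names (`organ_reward`, `discriminator`, `objective_fn`, `lam`) and the exact weighting convention are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an ORGAN-style blended reward: the discriminator term keeps
# samples close to the training data, while the objective term biases generation
# toward a desired metric. Names and the lambda convention are assumptions.
def organ_reward(sequence, discriminator, objective_fn, lam=0.5):
    """Combine adversarial and objective rewards for one complete sequence."""
    d_score = discriminator(sequence)    # in [0, 1]: how "real" the sample looks
    obj_score = objective_fn(sequence)   # desired metric, rescaled to [0, 1]
    return lam * obj_score + (1.0 - lam) * d_score
```

In the setting the abstract describes, a reward of this form would drive a policy-gradient (REINFORCE-style) update of the sequence generator, following the earlier GAN-plus-RL sequence-generation work the paper builds on.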


Datasets

ZINC


Results from the Paper


 Ranked #1 on Molecular Graph Generation on ZINC (QED Top-3 metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Molecular Graph Generation | ZINC | ORGAN | QED Top-3 | 0.896, 0.824, 0.820 | #1 |
| Molecular Graph Generation | ZINC | ORGAN | PlogP Top-3 | 3.63, 3.49, 3.44 | #1 |
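
For context on the QED metric in the table above, the snippet below is a minimal sketch of how drug-likeness (QED) scores are commonly computed for generated SMILES strings with RDKit; `smiles_samples` is a hypothetical list of generated molecules, and this reflects the standard RDKit evaluation rather than the benchmark's exact pipeline.

```python
# Score generated SMILES with RDKit's QED (quantitative estimate of drug-likeness)
# and report the three best molecules, mirroring a "QED Top-3" style metric.
from rdkit import Chem
from rdkit.Chem import QED

smiles_samples = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]  # hypothetical samples

scores = []
for smi in smiles_samples:
    mol = Chem.MolFromSmiles(smi)      # returns None for invalid SMILES
    if mol is not None:
        scores.append((smi, QED.qed(mol)))

top3 = sorted(scores, key=lambda x: x[1], reverse=True)[:3]
print(top3)
```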

Methods