Distributional Discrepancy: A Metric for Unconditional Text Generation

4 May 2020 · Ping Cai, Xingyuan Chen, Peng Jin, Hongjun Wang, Tianrui Li

The purpose of unconditional text generation is to train a model on real sentences and then generate novel sentences with the same quality and diversity as the training data. However, when different metrics are used to compare unconditional text generation methods, contradictory conclusions are drawn.
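The abstract motivates a single discrepancy measure between the real-text distribution and the generated-text distribution. As a toy illustration of that general idea only (this is not the paper's estimator, and the unigram representation is an assumption made here for simplicity), one can compute the total variation distance between the unigram frequency distributions of two corpora:

```python
from collections import Counter

def unigram_dist(sentences):
    """Normalized unigram frequencies for a tokenized corpus."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two discrete distributions.

    0.0 means identical unigram distributions; 1.0 means disjoint support.
    """
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)

# Hypothetical example corpora, for illustration only.
real = ["the cat sat", "the dog ran"]
generated = ["the cat sat", "the dog ran"]
print(total_variation(unigram_dist(real), unigram_dist(generated)))  # → 0.0
```

A metric of this kind goes to zero only when the two distributions match, which is the property the paper argues a good evaluation metric for unconditional text generation should capture at the distribution level rather than per sentence.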


