Improving Diversity of Neural Text Generation via Inverse Probability Weighting

13 Mar 2021 · Xinran Zhang, Maosong Sun, Jiafeng Liu, Xiaobing Li

Neural text generation suffers from degeneration issues such as repetition. Traditional stochastic sampling methods focus only on truncating the unreliable "tail" of the distribution and do not address the "head" part, which we show may contain tedious or even repetitive high-probability candidates that lead to repetition loops. They also do not account for the fact that human text does not always favor high-probability words. Motivated by these observations, we propose a heuristic sampling method: we use the interquartile range of the predicted distribution to determine the "head" part, then permute and rescale the "head" with inverse probability weighting. This decreases the probability of the tedious and possibly repetitive high-probability candidates, and increases the probability of the rational but more surprising low-probability candidates. The proposed algorithm provides a reasonable permutation of the predicted distribution that enhances diversity without compromising its rationality. We compare our algorithm with traditional sampling methods using a pre-trained language model. Results show that our algorithm can effectively increase the diversity of generated samples while achieving close resemblance to human text.
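As a rough illustration of the idea described above, the following is a minimal sketch of one possible sampling step. It assumes an IQR-based outlier rule (probabilities above Q3 + 1.5·IQR) to pick the "head", inverts the head probabilities, and renormalizes so the head keeps its total probability mass; the paper's exact head-selection and permutation rules may differ. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def inverse_probability_sampling(logits, temperature=1.0, rng=None):
    """Sketch of head rescaling with inverse probability weighting (assumed details)."""
    rng = rng or np.random.default_rng()

    # Softmax over the model's next-token logits.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    # Use the interquartile range of the predicted probabilities to flag
    # the high-probability "head" candidates (outlier rule is an assumption).
    q1, q3 = np.percentile(probs, [25, 75])
    head = probs > q3 + 1.5 * (q3 - q1)

    adjusted = probs.copy()
    if head.sum() > 1:
        # Inverse probability weighting inside the head: the most probable
        # (possibly repetitive) tokens are down-weighted, while rational but
        # more surprising head tokens are boosted. Total head mass is preserved.
        head_mass = probs[head].sum()
        inv = 1.0 / probs[head]
        adjusted[head] = head_mass * inv / inv.sum()

    adjusted /= adjusted.sum()
    return rng.choice(len(adjusted), p=adjusted)
```

Under these assumptions the tail is left untouched; only the head is reshaped, which is what distinguishes this scheme from tail-truncation methods such as top-k or nucleus sampling.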
