What Makes A Good Story? Designing Composite Rewards for Visual Storytelling

11 Sep 2019  ·  Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, Graham Neubig

Previous storytelling approaches have mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr. In this paper, we re-examine this problem from a different angle, looking deeply into what defines a realistically natural and topically coherent story. To this end, we propose three assessment criteria: relevance, coherence and expressiveness, which our empirical analysis suggests constitute a "high-quality" story to the human eye. Following this quality guideline, we propose a reinforcement learning framework, ReCo-RL, with reward functions designed to capture the essence of these quality criteria. Experiments on the Visual Storytelling Dataset (VIST) with both automatic and human evaluations demonstrate that our ReCo-RL model achieves better performance than state-of-the-art baselines on both traditional metrics and the proposed new criteria.
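
The abstract names the three reward criteria but not their formulas, so the snippet below is only a minimal sketch of how a composite, per-sentence reward over relevance, coherence and expressiveness might be combined before policy-gradient training. The scoring functions, weights and argument names are placeholders, not ReCo-RL's actual implementation.

```python
# Minimal sketch (NOT the paper's implementation): weighted sum of three
# placeholder scorers, one reward per generated story sentence.
from typing import Callable, Sequence


def composite_reward(
    sentence: str,
    image_features,                          # visual features of the aligned photo (placeholder)
    previous_sentences: Sequence[str],       # story context generated so far
    relevance_fn: Callable[[str, object], float],
    coherence_fn: Callable[[str, Sequence[str]], float],
    expressiveness_fn: Callable[[str], float],
    weights: Sequence[float] = (1.0, 1.0, 1.0),
) -> float:
    """Combine the three quality criteria into one scalar reward."""
    r_rel = relevance_fn(sentence, image_features)        # grounded in the image
    r_coh = coherence_fn(sentence, previous_sentences)    # consistent with prior sentences
    r_exp = expressiveness_fn(sentence)                   # penalize dull / repetitive wording
    w_rel, w_coh, w_exp = weights
    return w_rel * r_rel + w_coh * r_coh + w_exp * r_exp


# Toy usage with dummy scorers, purely to show the call shape:
r = composite_reward(
    "the family gathered for grandma's birthday",
    image_features=None,
    previous_sentences=["everyone arrived at the house early"],
    relevance_fn=lambda s, img: 0.8,
    coherence_fn=lambda s, ctx: 0.6,
    expressiveness_fn=lambda s: 0.7,
)
print(r)
```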


Datasets

VIST (Visual Storytelling Dataset)

Results from the Paper


Task: Visual Storytelling  ·  Dataset: VIST  (global benchmark rank in parentheses)

Model      BLEU-4        METEOR        CIDEr        ROUGE-L       SPICE
BLEU-RL    14.4 (#10)    35.2 (#18)    6.7 (#26)    30.1 (#11)    8.3 (#4)
HSRL        9.8 (#26)    30.1 (#31)    5.9 (#27)    25.1 (#26)    7.5 (#6)
AREL       13.6 (#18)    35.2 (#18)    9.1 (#15)    29.3 (#24)    8.9 (#2)
MLE        14.3 (#11)    34.8 (#23)    7.2 (#25)    30.0 (#13)    8.5 (#3)
ReCo-RL    12.4 (#23)    33.9 (#28)    8.6 (#19)    29.9 (#15)    8.3 (#4)

Methods

ReCo-RL: a reinforcement learning framework for visual storytelling whose composite reward functions target relevance, coherence and expressiveness.