Improving Image Captioning with Control Signal of Sentence Quality

7 Jun 2022  ·  Zhangzi Zhu, Hong Qu

In image captioning datasets, each image is paired with several reference descriptions. Although the quality of these descriptions varies, existing captioning models treat them all equally during training. In this paper, we propose a new control signal of sentence quality, which is taken as an additional input to the captioning model. By integrating this signal, captioning models become aware of the quality level of the target sentences and handle them differently. Moreover, we propose a novel reinforcement training method designed specifically for this control signal: Quality-oriented Self-Annotated Training (Q-SAT). Extensive experiments on the MSCOCO dataset show that, without extra information from ground-truth captions, models controlled by the highest quality level outperform baseline models on accuracy-based evaluation metrics, validating the effectiveness of the proposed methods.
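The core idea of a quality control signal can be sketched as conditioning the caption decoder on a discrete quality level at every decoding step. The snippet below is a minimal illustration only: the embedding sizes, the number of quality buckets, and the function names are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 100         # toy vocabulary size (illustrative)
EMBED_DIM = 16           # word-embedding dimension (assumed)
NUM_QUALITY_LEVELS = 5   # discrete quality buckets (assumed)
QUALITY_DIM = 4          # quality-level embedding dimension (assumed)

# Learnable tables in a real model; random here for the sketch.
word_embeddings = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))
quality_embeddings = rng.normal(size=(NUM_QUALITY_LEVELS, QUALITY_DIM))

def decoder_input(token_id: int, quality_level: int) -> np.ndarray:
    """Concatenate the word embedding with a quality-level embedding,
    so the decoder is aware of the target sentence's quality at each step."""
    return np.concatenate(
        [word_embeddings[token_id], quality_embeddings[quality_level]]
    )

# Training: each ground-truth caption is fed with its quality level.
# Inference: the highest level is requested to steer generation.
x = decoder_input(token_id=42, quality_level=NUM_QUALITY_LEVELS - 1)
print(x.shape)  # (20,) = EMBED_DIM + QUALITY_DIM
```

At inference, fixing the signal to the highest quality level asks the model for its best-quality captions, which is the setting the paper evaluates against baselines.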
