How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection

14 Nov 2023 · Ryuto Koike, Masahiro Kaneko, Naoaki Okazaki

To combat the misuse of Large Language Models (LLMs), many recent studies have presented LLM-generated-text detectors with promising performance. When users instruct LLMs to generate text, the instruction can include different constraints depending on the user's needs. However, most recent studies do not cover such diverse instruction patterns when creating datasets for LLM-generated-text detection. In this paper, we find that even task-oriented constraints (constraints that would naturally be included in an instruction and are unrelated to detection evasion) cause existing detectors to show large variance in detection performance. We focus on student essay writing as a realistic domain and manually create task-oriented constraints based on several factors of essay quality. Our experiments show that the standard deviation (SD) of current detectors' performance on texts generated from instructions with such constraints is significantly larger (up to an SD of 14.4 in F1 score) than the SD obtained by generating texts multiple times or by paraphrasing the instruction. Furthermore, our analysis indicates that the strong instruction-following ability of LLMs amplifies the impact of such constraints on detection performance.
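The core measurement described above is simple to sketch: generate essays under each instruction variant, score a detector's F1 per variant, and take the standard deviation across variants. Below is a minimal, illustrative sketch in Python, assuming a hypothetical binary detector `detect(text) -> bool` and a per-condition data layout; none of these names or data come from the paper's actual code or datasets.

```python
# Illustrative sketch only: measures how much a detector's F1 varies across
# instruction conditions (e.g., different task-oriented constraints).
# The detector, condition names, and data layout are all assumptions.
from statistics import stdev


def f1_score(preds: list[bool], labels: list[bool]) -> float:
    """F1 for the positive class (True = 'LLM-generated')."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    if tp == 0:
        # Covers the degenerate cases; F1 is taken as 0 when there are
        # no true positives.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def f1_per_condition(detect, conditions: dict[str, list[tuple[str, bool]]]) -> dict[str, float]:
    """One F1 score per instruction condition.

    `conditions` maps a condition name (e.g., a specific task-oriented
    constraint) to (text, is_llm_generated) pairs that mix LLM essays
    generated under that instruction with human-written essays.
    """
    return {
        name: f1_score(
            [detect(text) for text, _ in pairs],
            [label for _, label in pairs],
        )
        for name, pairs in conditions.items()
    }


def f1_standard_deviation(detect, conditions) -> float:
    """SD of detector F1 across conditions (needs at least two conditions)."""
    return stdev(f1_per_condition(detect, conditions).values())
```

Running this computation for three kinds of variation (task-oriented constraints, repeated generation with a fixed instruction, and instruction paraphrases) would mirror the paper's comparison, where the constraint condition yields the largest SD.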
