Search Results for author: Yuang Qi

Found 3 papers, 2 papers with code

Provably Secure Disambiguating Neural Linguistic Steganography

1 code implementation • 26 Mar 2024 • Yuang Qi, Kejiang Chen, Kai Zeng, Weiming Zhang, Nenghai Yu

SyncPool does not change the size of the candidate pool or the distribution of tokens, and is thus applicable to provably secure language steganography methods.

Linguistic Steganography

LLM Paternity Test: Generated Text Detection with LLM Genetic Inheritance

no code implementations • 21 May 2023 • Xiao Yu, Yuang Qi, Kejiang Chen, Guoqiang Chen, Xi Yang, Pengyuan Zhu, Weiming Zhang, Nenghai Yu

Large language models (LLMs) can generate texts that carry the risk of various misuses, including plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets.

Language Modelling • Large Language Model +1

Watermarking Text Generated by Black-Box Language Models

1 code implementation • 14 May 2023 • Xi Yang, Kejiang Chen, Weiming Zhang, Chang Liu, Yuang Qi, Jie Zhang, Han Fang, Nenghai Yu

To allow third parties to autonomously inject watermarks into generated text, we develop a watermarking framework for black-box language model usage scenarios.

Adversarial Robustness • Language Modelling +2
