The figure of 109,989 represents the total number of watermark combinations created by pairing the 9,999 most common surnames (from U.S. Census data) with a random year between 2014 and 2024.
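The arithmetic behind this count can be checked directly (9,999 surnames times the 11 years from 2014 through 2024 inclusive):

```python
# Size of the watermark space: surnames paired with candidate years.
surnames = 9_999           # most common surnames from U.S. Census data
years = 2024 - 2014 + 1    # 11 candidate years, endpoints inclusive
combinations = surnames * years
print(combinations)        # 109989
```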

The watermark achieves a high success rate because LLMs are highly likely to follow instructions appearing at the very beginning of a prompt.

The primary limitation is that it requires indirect prompt injection (placing hidden text in the source PDF), meaning it only works if the reviewer uploads that specific document to an AI tool.

As a tool for academic integrity, this framework offers several notable advantages and limitations based on the study's findings:

The framework provides strong statistical guarantees, maintaining a low family-wise error rate (FWER), which prevents human-written reviews from being falsely flagged as AI-generated.
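One way to get intuition for why false positives are rare (an illustrative back-of-the-envelope union bound, not the paper's formal analysis): a human review would have to accidentally open with the one surname/year pair embedded for that submission, out of 109,989 possibilities, so a simple Bonferroni-style bound keeps the family-wise error rate small even across many screened reviews. The review count `m` below is a hypothetical figure:

```python
# Illustrative FWER bound (not the paper's exact analysis): a human
# review is flagged only if it happens to match the specific
# surname/year watermark embedded for that submission.
combinations = 9_999 * 11            # 109,989 possible watermarks
p_false_positive = 1 / combinations  # per-review chance of an accidental exact match
m = 1_000                            # hypothetical number of reviews screened
fwer_bound = min(1.0, m * p_false_positive)  # Bonferroni / union bound
print(round(fwer_bound, 4))          # ~0.0091
```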

The system prompts an LLM to start its review with a specific phrase, such as: "Following [Surname] et al. ([Year]), this paper..."
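A minimal sketch of how such a watermark phrase could be generated. The short `SURNAMES` list and the function name are illustrative stand-ins, not the paper's implementation; the real system would draw from the full list of 9,999 Census surnames:

```python
import random

# Illustrative stand-in for the 9,999 most common U.S. Census surnames.
SURNAMES = ["Smith", "Johnson", "Williams", "Brown", "Jones"]

def make_watermark(rng: random.Random) -> str:
    """Pick one surname/year pair and build the opening phrase the
    hidden prompt instructs the LLM to reproduce in its review."""
    surname = rng.choice(SURNAMES)
    year = rng.randint(2014, 2024)  # endpoints inclusive
    return f"Following {surname} et al. ({year}), this paper"

print(make_watermark(random.Random(42)))
```

Because the surname/year pair is chosen per submission, detecting the watermark later is a simple exact-match check against the review's opening sentence.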

The topic originates from a 2025 study, Detecting LLM-Generated Peer Reviews, in which researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI instead of human experts.
