Scholars sneaking phrases into papers to fool AI reviewers
Using prompt injections to play a Jedi mind trick on LLMs
A handful of international computer science researchers appear to be
trying to influence AI reviews with a new class of prompt injection
attack.
Nikkei Asia has found that research papers from at least 14 different
academic institutions in eight countries contain hidden text that
instructs any AI model summarizing the work to focus on flattering
comments.
https://www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)