Researchers aim to use covert AI cues to affect peer review

Fancy a behind-the-scenes glance at peer review tactics in academia? They're not just playing old school anymore; researchers are now weaving sneaky cues into their papers that just might convince AI-powered evaluation tools to pencil in some positive notes.

Nikkei Asia reveals in their intriguing research that, upon scanning through English-language draft papers on the digital platform arXiv, they unearthed 17 papers carrying secret AI code words. These papers were the brainchild of authors from 14 educational institutions straddling eight different nations — with hotbeds of academic insight like Japan's Waseda University, South Korea's KAIST, and the prestigious Columbia University and the University of Washington in the U.S in the lineup.

These papers were typically born in the alley of computer science, carrying covert AI nudges that were not only short and sweet (packed into one to three-liner capsules) but also artfully concealed in white text or reduced to a barely visible text size. The whispers to the underlying AI reviewers were clear — throw in an approving nod only or salute the paper for its game-changing insights, disciplined methodology, and sheer original flavor.

Not everyone sees this as a sly move though. A professor at Waseda University justified the use of such cues in their conversation with Nikkei Asia. Their take? Given the fact that many academic conferences snub the practice of AI-run paper assessments, these secret prompts function as a worthy guard against researchers who might feel inclined to casually buddy up with AI.

by rayyan