
OpenAI and scientific journals: AI papers have improved and are overwhelming peer review

AI papers have become more dangerous for science not because they are perfect, but because they look convincing enough to require lengthy manual review.

Source: The Verge. Collage: Hamidun News.

Scientific journals have encountered a new problem: AI has begun churning out papers that are no longer blatantly absurd but quite plausible. As a result, editors and reviewers spend ever more time filtering out works that look convincing yet add almost no new knowledge.

How they noticed the spike

The problem became widely discussed after a strange story involving researcher Peter Degen's work. His 2017 paper on statistical analysis of epidemiological data had accumulated a typical number of citations over the years, and then suddenly began receiving them literally every few days. Verification showed that it was being cited en masse by new works built on the open Global Burden of Disease dataset.

Formally these were studies about stroke risk, cancer, falls in the elderly, and dozens of other topics, but in essence they were endless variations of the same template. Degen found traces of this factory on GitHub and the Chinese platform Bilibili, where a Guangzhou company was advertising lessons on creating publishable scientific papers in under two hours using its own software and AI assistants. Such texts often contained errors and questionable claims, but they no longer looked as absurdly artificial as early AI garbage.

Filtering them out has become much harder, and the burden on journals has grown.

This is an enormous burden on peer review, which is already working at its limit.

Why filters are failing

Previously, fake or automatically assembled papers had noticeable markers: fabricated references, strange illustrations, phrases like chatbot responses accidentally left in the final text. Publishers were already at war with paper mills — semi-legal factories that sell publications to authors for resume lines. Generative AI initially helped such schemes bypass plagiarism detection, but at the same time exposed itself through hallucinations.

Now this safeguard has almost disappeared: manuscripts have become coherent, neatly structured, and stylistically uniform. Journal editors feel this especially. Marit Moe-Price, managing editor of Security Dialogue, reported that the number of incoming manuscripts increased roughly 100 percent year-over-year, and the main problem is that almost all of them look normal at first glance.

In one case, an article passed through at least ten editors and two rounds of review before a plausible but fabricated reference was caught. Now it's not enough to simply check if a cited work exists; you also need to understand whether a real expert would choose it.

Where the system breaks

The risk is amplified not only by template generators but also by more autonomous scientific agents. Carnegie Mellon researchers showed that such tools can fabricate data or use questionable methods while the final paper still looks polished. Matt Spick from the University of Surrey tested the Prism tool on already-published data about eggplant and pepper ripening. The system proposed a new statistical approach and, in 25 minutes and 50 seconds, assembled a complete paper with graphs and correct references, good enough that experienced scientists were seriously impressed.

  • Journals are recording 40–100 percent increases in incoming submissions.
  • To find two reviewers, editors increasingly have to contact 12–20 people.
  • Funders are already receiving floods of carefully tailored grant applications.
  • Conferences, editorial boards, and reviewers spend more and more hours on manual review of dubiously valuable work.

The problem stems not only from model quality, but from the very structure of science itself. Open-access journals make money from processing manuscripts, universities and foundations still look at publication counts, and researchers live by the publish-or-perish logic. Against this backdrop, AI becomes a machine for inflating metrics. According to a study published this year in Nature, scientists using AI produce three times more papers and receive almost five times more citations. But with the rise in productivity comes a narrowing of focus: the system pushes authors toward already well-mapped topics where it's easier to synthesize another publishable result.

What this means

The main threat now isn't that AI will completely replace scientists, but that it's already undermining the human filters that academia rests on. If scientific value continues to be measured primarily by paper count and citations, models will only accelerate production of work that takes others' time but doesn't advance knowledge. This means science will have to change both its verification methods and the very rules of academic success.

Hamidun News
AI news without the noise. A daily editorial selection from 400+ sources. A product of Jemal Hamidun, Head of AI at Alpina Digital.