Abstract
Peer review serves as the primary “gatekeeping” mechanism in science communication, filtering validated knowledge from noise before it reaches the public sphere. This commentary argues that integrating Generative AI into this process acts as a structural disruptor, threatening to erode the “seal of quality” that underpins public trust in science. By analyzing the systemic feedback loop between AI-generated submissions (“supply-side flood”) and AI-assisted reviews (“demand-side automation”), we demonstrate how the ecosystem risks accelerating the “degradation of work” and creating “monocultures of knowing.” Using strategic foresight, we outline four scenarios for the future of scientific publishing, ranging from a collapse of trust to algorithmic governance. We conclude that to avoid a crisis of credibility, the community must pivot from “Algorithmic Aggregation” to “Contextual Stewardship.” We propose that human reviewers remain indispensable not for processing speed but for four uniquely human competencies: verifying ground truth, arbitrating ethics, communicating uncertainty, and curating the paradigm-shifting anomalies that automated systems reject as noise but that in fact drive scientific progress.
