Dear QF Community,

This is a special episode dedicated to a recent Nature piece (https://www.nature.com/articles/d4158...) highlighting a growing crisis in computer science publishing. The rapid rise of low-quality or fabricated AI-generated papers, often described as "AI slop", is overwhelming preprint repositories and conference review systems. With LLM tools making it possible to generate plausible-looking research papers in minutes, the volume of submissions is rising sharply while reviewers' capacity to verify work properly is collapsing. The article notes record-breaking conference submission numbers, rising rejection rates on arXiv, and growing evidence of hallucinated citations and AI-generated content slipping into serious venues. In response, conferences and platforms are introducing stricter policies, financial disincentives, and AI-assisted screening, but the underlying structural pressure remains unresolved.

From Quantum Formalism (QF) Academy's perspective, this is not simply an AI problem. It is the collision of AI with a research culture that was already strained. The publish-or-perish model, particularly in computer science, is no longer viable in an era where paper production can be automated at scale. As long as career progression rewards volume over depth, researchers, especially early-career researchers under intense pressure, will remain incentivised to use the fastest available tools to churn out papers, whether or not those outputs reflect genuine intellectual contribution. The outcome is predictable: more noise, weaker peer review, and a slow erosion of trust.

Even in mathematics, where depth is supposedly prized, young researchers are rarely encouraged to spend years chasing a genuinely difficult open problem. In many departments, doing so is quietly seen as risky, even reckless, because it does not reliably produce the steady stream of publishable results needed for hiring, tenure, and grant applications.
A famous illustration of this dynamic is Andrew Wiles's (https://en.wikipedia.org/wiki/Andrew_...) work on Fermat's Last Theorem. Wiles spent years working in near-complete secrecy, not because mathematics rewards secrecy, but because the incentives punish uncertainty. If you commit your prime research years to a single monumental problem and fail, you may end up with nothing "countable" to show for it. Wiles protected himself from that institutional risk by working privately until he was confident he had something real.

This is precisely why the current system becomes unstable in the AI era. When output volume becomes easier than ever to generate but career structures still reward quantity, the pressure does not disappear; it only grows. Unless the culture changes, the outcome is predictable: researchers will optimise for production rather than truth, and the system will keep drifting towards noise instead of knowledge.

What is needed now is not an AI ban but a redesign of research training. We need a culture of responsible AI use, where AI supports the research process without replacing the cognitive work that makes research meaningful. AI can assist with literature scanning, coding, writing clarity, and verification workflows, but it must not become a substitute for intellectual ownership, mathematical reasoning, or genuine scientific judgement.

Special shout-out to our friend Andrew Akbashev for his LinkedIn post (https://www.linkedin.com/posts/andrew...), which first drew our attention to the Nature article.

Wishing you a wonderful rest of the week.

Quantum Formalism (QF) team

Discover Our Second Focus Track: Mathematics of Topological Data Analysis

This course explores the intersection of algebraic topology and computer science, focusing on algorithms that compute topological invariants from discrete data.
It's designed to build both conceptual understanding and practical insight into one of the most exciting mathematical frameworks in modern data science. Have a look at the syllabus and get started here: https://quantumformalism.academy/courses

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit quantumformalism.substack.com/subscribe