As AI makes generating research easy, the bottleneck shifts to verification.
With the first AI-generated papers accepted to research conferences, and systems popping up that make going from research idea to full-fledged paper as easy as four function calls, I wondered: how will we keep up with assessing the individual value of each of these works? On the one hand, using AI to judge these outputs may work: recommender systems have been around for years now, with TikTok having perfected the ability to recommend "interesting" content. LLM-based feedback has even powered some of the RL used in recent reasoning models such as Kimi-K2. But as we will see, AI is far from perfect at this... And more generally, what even is the value of creation in a world where generation is so easy?

Refs:
AI Scientist V2: https://sakana.ai/ai-scientist-first-...
Denario project page: https://astropilot-ai.github.io/Denar...
Paper on AI in reviews: https://arxiv.org/html/2403.07183v1
Reinforcement Learning from Execution Feedback: https://arxiv.org/pdf/2410.02089
Mistral's model: https://arxiv.org/pdf/2506.10910
PaperBench: https://arxiv.org/pdf/2504.01848
Kimi-K2: https://arxiv.org/pdf/2507.20534
Kontorovich's Notes on a Path to AI Assistance in Mathematical Reasoning: https://arxiv.org/pdf/2310.02896
Driven by Compression Progress: https://arxiv.org/abs/0812.4360
JudgeBench: https://arxiv.org/abs/2410.12784
Blog on what's going on with LLMs: https://www.lesswrong.com/posts/vvgND...
Song: https://pixabay.com/music/beats-goldn...