Yoshua Bengio: The AI Community Is Avoiding a Hard Conversation (w/ Andy Konwinski at NeurIPS 2025)
Yoshua Bengio, Turing Award winner, President and Scientific Director of LawZero, and Founder and Scientific Advisor at Mila, reflects on the challenges facing frontier AI research: global inequality, concentration of power, and the need for epistemically honest systems.

Subscribe for more in-depth conversations with the researchers and founders behind frontier AI.

In this conversation:
• Why publishing increasingly powerful models openly may become untenable
• How concentration of frontier AI could reshape global power dynamics
• The importance of safety, transparency, and “epistemic caution”
• Research directions needed to ensure AI systems remain trustworthy

Recorded at Laude Lounge @ NeurIPS 2025. More at laude.org.

X: https://x.com/LaudeInstitute
LinkedIn: /laude-institute
Yoshua's X: https://x.com/Yoshua_Bengio
Yoshua on LinkedIn: /yoshuabengio

---

Hosted by Andy Konwinski
Creative Producer - Mike Maley
Production Manager - Lauren Lukow
Videographer - Andrew James Benson
Assistant Camera - Bradley Smith
Senior Video Editor / Graphics - Cai Lee
Editors - Jordan Calig, Juan Diego Parra
Audio Editor - Carter Wogahn
Produced by K. Tighe, Kayleigh Karutis, and Chris Rytting.
Produced by Laude in partnership with Pod People.

---

Chapters
0:00 — Why “Law Zero” Exists: Protecting Humanity First
1:09 — AI Power, Democracy, and Concentration of Control
2:05 — Is Human Intelligence the Ceiling?
3:15 — Quadrillion-Dollar Upside vs Existential Risk
4:16 — Global Coordination vs AI Arms Races
5:35 — Why Today’s AI Race Is Unlike Nuclear Weapons
6:36 — Alignment as a Technical Problem (Not Ethics)
7:24 — Deceptive Models, Lying, and Emergent Misalignment
8:22 — Epistemic Honesty and “Scientist AI”
9:48 — Hallucinations vs Intentional Deception
11:18 — AI Doing AI Research: Acceleration Risks
12:23 — Superintelligence as a Nonlinear Inflection Point
14:05 — Regulation, Externalities, and Liability Insurance
15:39 — Why Insurance Could Regulate AI Better Than Laws
18:30 — Open vs Closed Models: Where Openness Breaks
21:54 — Biotech, Cyber, and Knowledge Weapons Risks
24:48 — When Open Models Become Too Dangerous
27:33 — Who Decides AI Release Thresholds?
29:41 — Concentration of Power vs Global Fragmentation
33:38 — Few Powerful Models, Shared Global Control
38:34 — Why “Science Fiction” Framing Is Dangerous
40:06 — Rethinking AGI and Measuring Real Risk
42:00 — What Open Research Still Gets Right
43:25 — Final Thoughts: Safety Before the Point of No Return

#artificialintelligence #opensource #airesearch #machinelearning #ai #computerscience #superintelligence #airegulation #llms