Dr. Roman Yampolskiy is one of the top thought leaders in AI safety and a Professor of Computer Science and Engineering. He coined the term "AI safety" in 2010 and has published groundbreaking papers on the dangers of AI, simulations, and alignment. He is also the author of books such as 'Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks'. https://scholar.google.com/citations?...

The latest AI news. Learn about LLMs and generative AI, and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA, and open-source AI.

______________________________________________

My Links 🔗
➡️ Twitter: https://x.com/WesRothMoney
➡️ AI Newsletter: https://natural20.beehiiv.com/subscribe

Want to work with me? Brand, sponsorship & business inquiries: wesroth@smoothmedia.co

Check out my AI podcast, where Dylan and I interview AI experts: • AI POD - Wes Roth and Dylan Curious

______________________________________________

TIMELINE
00:00:00 Dr. Roman Yampolskiy and AI safety
00:02:45 what our future looks like
00:05:46 Mutually Assured Destruction
00:06:34 general vs. narrow superintelligence
00:07:51 different AI architectures
00:08:27 does mechanistic interpretability solve AI alignment?
00:11:35 instrumental convergence
00:13:17 is superintelligence just scaling?
00:14:49 surprising AI abilities
00:17:10 truly horrifying AI outcomes
00:20:12 p(doom)
00:20:56 "boxing" superintelligence in a simulation
00:23:38 are we in a simulation?
00:26:54 should Google control superintelligence?
00:32:38 how consciousness emerged
00:39:14 outlook
00:40:35 AI timelines
00:43:43 narrow vs. general systems
00:45:42 human bias
00:48:22 AI/human symbiosis
00:50:42 AI religion
00:52:58 evolution vs. intelligent design
00:57:08 limit of intelligence
01:00:00 hacking our simulation
01:05:32 book recommendation
01:06:55 positive AI scenario
01:08:42 daily stoic
01:11:05 organic bootloaders and aliens
01:13:42 how different audiences respond to AI safety
01:16:12 China vs. US
01:20:04 robots

#ai #openai #llm