In this episode, we’re joined by Nick Bostrom, professor in the Faculty of Philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regard to AI safety and ethics. Nick is, of course, also the author of the book “Superintelligence: Paths, Dangers, Strategies.” In our conversation, we discuss the risks associated with Artificial General Intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick’s writings on the topic of openness in AI development, and the advantages and costs of open and closed development on the part of nations and AI research organizations. Finally, we take a look at what good safety precautions might look like, and how we can create an effective ethics framework for superintelligent systems.

The notes for this episode can be found at https://twimlai.com/talk/181.

Subscribe:
Apple Podcasts: https://tinyurl.com/twimlapplepodcast
Spotify: https://tinyurl.com/twimlspotify
RSS: https://twimlai.libsyn.com/rss
Full episodes playlist: • The TWIML AI Podcast (formerly This Week i...
Subscribe to our YouTube channel: / @twimlai

Podcast website: https://twimlai.com
Sign up for our newsletter: https://twimlai.com/newsletter
Check out our blog: https://twimlai.com/blog
Follow us on Twitter: https://twimlai.com/twimlai
Follow us on Facebook: / twimlai
Follow us on Instagram: / twimlai