We're Not Ready for Superintelligence

Our second video is now out!

AI 2027 depicts a possible future where artificial intelligence radically transforms the world in just a few intense years. It's based on detailed expert forecasts — but how much of it will actually happen? Are we really racing towards a choice between a planet controlled by the elite, or one where humans have lost control entirely?

My takeaway? Loss of control, racing scenarios, and concentration of power are all concerningly plausible, and among the most pressing issues the world faces.

Check out the video and the resources below, judge the scenario for yourself, and let me know in the comments: how realistic is this? What are you still confused about? What makes you feel skeptical? What do you think we can actually do about this?

Where to find me, Aric Floyd

Subscribe to AI in Context to get up to speed and join the conversation about AI. There's a lot to figure out, and we might have less time than you think. It's time to jump in.

You can also follow for skits and explainers on YouTube Shorts as well as:
  • TikTok: / ai_in_context
  • Instagram: / ai_in_context

This video is a production of 80,000 Hours. Find us at https://80000hours.org and subscribe to our main YouTube channel here: @eightythousandhours

What you can do next

To read more about what you might be able to do to help, or get involved, check out: https://80000hours.org/agi/
You can also check out the 80,000 Hours job board at https://jobs.80000hours.org
Or see what the authors of AI 2027 suggest doing next: https://blog.ai-futures.org/p/what-yo...
Or take a 2-hour course on the Future of AI: https://bluedot.org/courses/future-of-ai
You can tell your US or UK representatives you care about this issue in 60 seconds using this tool: https://controlai.com/take-action/
And if you just want some practical recommendations for how you and your family can get more prepared: https://benjamintodd.substack.com/p/h...
Further reading and watching

About AI 2027
Full report: https://ai-2027.com/ — by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
Update on their model: https://ai-2027.com/research/timeline...
The lead author's change in median forecast to 2028: https://x.com/DKokotajlo/status/19402...

For more videos about AI risk, check out:
  • Previous video about AI 2027: "AI 2027: A Realistic Scenario of AI Takeover"
  • "Could AI wipe out humanity? | Most pressing problems"
  • Intro to AI Safety by Rob Miles: "Intro to AI Safety, Remastered"
  • Me on Computerphile: "AI Sandbagging - Computerphile"
  • For more on what it means for an AI to "seek reward", check out my short video: "Important concepts in AI: Reward Hacking, ..."

To read more about misalignment and AI risk: https://80000hours.org/problem-profil...
To read more about why AGI by 2030 is plausible: https://80000hours.org/agi/guide/when...

Chapters
0:00 Introduction
1:15 The World in 2025
3:53 The Scenario Begins
6:07 Sidebar: Feedback Loops
7:21 China Wakes Up
10:11 Sidebar: Chain of Thought
10:52 Better-than-human Coders
11:46 Sidebar: Misalignment in the Real World
12:08 Agent-3 Deceives
15:18 Sidebar: How Misalignment Happens
17:53 The Choice
20:07 Ending A: The Race
24:08 Ending B: Slowdown
26:30 Zooming Out
29:04 The Implications
31:19 What Do We Do?
33:30 Conclusions and Resources

Credits
Directed and Produced by Phoebe Brooks: https://pbrooksfilms.com/
Written by Phoebe Brooks and Aric Floyd
Editing, Graphics and Animation by Phoebe Brooks, Sam Watkins and Daniel Recinto: https://www.watkinsfilms.com/, http://behance.net/danielrecinto
Executive Produced by Chana Messinger
Production assistance from Charlotte Maxwell, Jack Worrall, David Erwood and Jake Morris
With special thanks to Daniel Kokotajlo, Ryan Greenblatt, Nate Soares, Max Harms, Katja Grace, Mark Beall, Seán Ó Héigeartaigh and Eli Lifland
And thanks to Bella Forristal, Arden Koehler, Ailbhe Treacy, Rob Wiblin, Sean Riley, Siliconversations, Mathematicanese, Valerie Richmond, Daria Ivanova, Sloane Siegel, Brendan Hurst, Katy Moore, Mark DeVries, Ines Fernandez, Francesca Forristal, Rob Miles, Elizabeth Cox, Drew Spartz, Petr Lebedev, Mithuna Yoganathan, Conor Barnes
