I'm Sick of the ASI Fear-Mongering (Hank Green's Video Made Me Rage, Featuring Nate Soares)

I watched a video with Hank Green and Nate Soares discussing the existential threat of Artificial Super Intelligence (ASI), and honestly, I was infuriated. Soares, co-author of If Anyone Builds It, Everyone Dies, argues that if an ASI is not perfectly "aligned" with human values, it will surely lead to our doom. But as I break down their major points, from their vague, fear-inducing definition of ASI to their outlandish claims about current AI capabilities, it seems like the real problem isn't a malicious super-intelligence, but rather the unnecessary mystery and power concentration surrounding the technology.

This video is my detailed objection to the "stop all AI" argument. I challenge the notion that LLMs are plotting, self-aware, or "caring" in a way that poses an existential threat. The metaphors they use, like comparing AI development to alchemy, only serve to obscure the technology and increase the power of the builders (the "alchemists"). Ultimately, the true "alignment problem" isn't with an uncaring algorithm, but with the huge corporations and influential figures who are shaping this powerful technology. I propose four foundational principles for thinking about AI that prioritize clarity, human flourishing, and a healthy distrust of the powerful.

Resources:
-- Hank's original video: "ChatGPT isn't Smart. It's something Much W..."
-- The Anthropic paper on self-awareness: https://assets.anthropic.com/m/12f214... (page 58)

Here is a chunk of my speaking notes, to help you orient yourself (timestamps refer to Hank's video):

Caveats: the run-down and my problems

Nate defines ASI as "smarter/better than the average human at any mental task" (8:00 mark).
** The definition is super vague.
*** Does ASI have to be doing the things mentally? Or just be functionally equivalent at the mental tasks?
*** Does it have to be better at any single task? Two? Three? How many?
Bold take: the vagueness is the point.
** They disparage philosophy several times. I hate that, but it's also weird: they do it almost as a crutch, which points to a gaping hole in their worldview that they can't fill in.

First, Nate suggests that some LLMs are using proto-reasoning (11:38).
*** This isn't something we can just gloss over. If they aren't thinking, then they aren't ASI. And if they're not ASI, then the whole thing crumbles.
** They make outlandish claims about how capable AI is.
*** Saying that they can lie loads the dice.
*** It takes for granted that the truth part is more or less easy, which is nonsense.
** They contradict themselves about whether LLM words mean anything at all.
*** They think the words give us insight into its thinking.
*** Then they say we can't trust AI because it is using words differently (13:05): "[Nate] Sometimes your human intuitions for what these pieces of reasoning mean aren't how the AI is using those words." [Hank] "That freaks me out. I'm just saying that freaks me out." (15:08)

Third, Hank and Nate talk about how good an AI can be at understanding creatures (19:40). But I found this very confusing. So what?

** What are LLMs doing? He does a great job of describing how LLMs work (28:00 mark). A minimal sketch of the standard next-token picture follows right after this item.
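The 28:00 explanation itself isn't reproduced in these notes, but the standard picture it gestures at is next-token prediction run in a loop. Below is a minimal, hedged sketch of that loop; model and tokenizer are hypothetical stand-ins for any concrete implementation, not a real library's API.

    import random

    def generate(model, tokenizer, prompt, max_new_tokens=50):
        """Autoregressive sampling: the core loop behind LLM text generation."""
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            # The model assigns a probability to every token in its vocabulary,
            # conditioned on everything in the context so far.
            # (next_token_probabilities is a hypothetical method for this sketch.)
            probs = model.next_token_probabilities(tokens)
            # Sample one token from that distribution and append it. There is no
            # hidden agenda being executed, just repeated conditional sampling.
            next_token = random.choices(range(len(probs)), weights=probs)[0]
            tokens.append(next_token)
        return tokenizer.decode(tokens)

Nothing in this loop "wants" anything; whether something more is going on behind the sampled words is exactly the case that, as argued below, still has to be made.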
** They just throw out there that AI is self-aware (13:58). We can go straight to the source material here: https://assets.anthropic.com/m/12f214... (page 58)
*** The alignment problem, the big one. They describe the AI as wanting/caring about things that don't align with our well-being and interests.
**** AI DOESN'T CARE! Or at least, you have to make the case that it does.
**** Alignment is the issue, but it's not me being misaligned with Claude and ChatGPT and Grok; it's me being misaligned with Dario Amodei and Sam Altman and Elon Musk.

** Eliezer Yudkowsky. This gets tied up in a bunch of things, but it starts with Harry Potter rationalist fan fiction. It gets into effective altruism and Roko's Basilisk. It gets really weird.

** Decision Theory Run Amok. The argument runs:
1. ASI is at least a little bit likely.
2. If ASI comes about, there's an x% chance everybody dies.
3. If x is greater than some low number, we should stop AI development.
4. So, we should stop AI development.
You HAVE to get the first premise to a plausible level, or the argument never gets going. (The expected-value form of this argument is sketched after these notes.)

** Some metaphors they use a lot:
*** AI is grown, not coded. It's like an alien biology.
*** Doritos/sucralose/junk food (cigarettes).
*** Alchemy.

Foundational views:
** Ecclesiastes: nothing new under the sun.
** Technology isn't magic. Try to understand stuff, and it's fine to admit when you don't.
** Distrust the large/powerful/influential people and companies.
** Be kind to others. Promote human flourishing.

To Hank Green
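To make the structure of that decision-theory argument concrete, here is a minimal expected-value sketch. The symbols p, x, V, and C are my own placeholders, not numbers Nate or Hank actually give; the point is only to show where the work is being done.

    % Hedged formalization of "Decision Theory Run Amok"; all symbols are illustrative.
    \[
    \mathrm{E}[\text{loss}] \;=\;
    \underbrace{p}_{\Pr[\text{ASI arrives}]} \cdot
    \underbrace{x}_{\Pr[\text{doom}\,\mid\,\text{ASI}]} \cdot
    \underbrace{V}_{\text{value at stake}},
    \qquad
    p\,x\,V \;>\; C \;\Longrightarrow\; \text{stop AI development},
    \]

where C is the cost of halting development. Because V is treated as astronomically large, the inequality goes through for almost any nonzero p and x, which is exactly why everything hinges on premise 1: if "ASI is at least a little bit likely" can't be made plausible, the conclusion never gets off the ground.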
