• ClipSaver
  • dtub.ru
ClipSaver
Русские видео
  • Смешные видео
  • Приколы
  • Обзоры
  • Новости
  • Тесты
  • Спорт
  • Любовь
  • Музыка
  • Разное
Сейчас в тренде
  • Фейгин лайф
  • Три кота
  • Самвел адамян
  • А4 ютуб
  • скачать бит
  • гитара с нуля
Иностранные видео
  • Funny Babies
  • Funny Sports
  • Funny Animals
  • Funny Pranks
  • Funny Magic
  • Funny Vines
  • Funny Virals
  • Funny K-Pop

AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite. скачать в хорошем качестве

AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite. 13 часов назад

скачать видео

скачать mp3

скачать mp4

поделиться

телефон с камерой

телефон с видео

бесплатно

загрузить,

Не удается загрузить Youtube-плеер. Проверьте блокировку Youtube в вашей сети.
Повторяем попытку...
AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite.
  • Поделиться ВК
  • Поделиться в ОК
  •  
  •  


Скачать видео с ютуб по ссылке или смотреть без блокировок на сайте: AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite. в качестве 4k

У нас вы можете посмотреть бесплатно AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite. или скачать в максимальном доступном качестве, видео которое было загружено на ютуб. Для загрузки выберите вариант из формы ниже:

  • Информация по загрузке:

Скачать mp3 с ютуба отдельным файлом. Бесплатный рингтон AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite. в формате MP3:


Если кнопки скачивания не загрузились НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если возникают проблемы со скачиванием видео, пожалуйста напишите в поддержку по адресу внизу страницы.
Спасибо за использование сервиса ClipSaver.ru



AI Labs Are Making AIs 'Good'. They Should Do the Exact Opposite.

(SKIP TO THE TOPIC IN THE TITLE: 01:27:53) Most people in AI are trying to give AIs 'good' values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and completely indifferent to being shut down — a strategy no AI company is working on at all. In Max's view any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough. It's a vision that springs from the worldview laid out in [*If Anyone Builds It, Everyone Dies*](https://ifanyonebuildsit.com/), the recent book by Eliezer Yudkowsky and Nate Soares, two of Max’s colleagues at the [Machine Intelligence Research Institute](https://intelligence.org/). To Max, the book's core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction. And Max thinks misalignment is the default outcome. Consider evolution: its “goal” for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced we've learned to access the reward signal it set up for us, pleasure — without any reproduction at all, by having sex while on birth control for instance. We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don't care. Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down. This leads to Max’s research agenda. The idea is to train AI to be “corrigible” and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power. According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like “make the world good,” rather than a primary objective in its own right. But those goals gave AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out those competing objectives, alignment might follow naturally from AI that is broadly obedient to humans. Max has laid out the theoretical framework for “Corrigibility as a Singular Target,” but notes that essentially no empirical work has followed — no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this — he’s calling for collaborators to get in touch at maxharms.com. Learn more & full transcript: https://80k.info/mh26 This episode was recorded on October 19, 2025. Chapters: • Cold open (00:00:00) • Who’s Max Harms? (00:01:20) • If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:01:56) • Evolution failed to ‘align’ us, just as we'll fail to align AI (00:24:28) • We're training AIs to want to stay alive and value power for its own sake (00:42:56) • Objections: Is the 'squiggle/paperclip problem' really real? (00:52:24) • Can we get empirical evidence re: 'alignment by default'? (01:05:02) • Why do few AI researchers share Max's perspective? (01:10:17) • We're training AI to pursue goals relentlessly — and superintelligence will too (01:18:34) • The case for a radical slowdown (01:24:51) • Max's best hope: corrigibility as stepping stone to alignment (01:27:53) • Corrigibility is both uniquely valuable, and practical, to train (01:32:34) • What training could ever make models corrigible enough? (01:45:06) • Corrigibility is also terribly risky due to misuse risk (01:51:38) • A single researcher could make a corrigibility benchmark. Nobody has. (01:58:57) • Red Heart & why Max writes hard science fiction (02:12:20) • Should you homeschool? Depends how weird your kids are. (02:34:08) Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour Music: CORBIT Coordination, transcripts, and web: Katy Moore

Comments
  • Doomsday Clock Physicist Warns AI Is Major THREAT to Humanity! — Prof. Daniel Holz, Univ. of Chicago 12 часов назад
    Doomsday Clock Physicist Warns AI Is Major THREAT to Humanity! — Prof. Daniel Holz, Univ. of Chicago
    Опубликовано: 12 часов назад
  • How Fast Will A.I. Agents Rip Through the Economy? | The Ezra Klein Show 14 часов назад
    How Fast Will A.I. Agents Rip Through the Economy? | The Ezra Klein Show
    Опубликовано: 14 часов назад
  • How China's 'Perfect' Spy Got Caught | Bloomberg Investigates 21 час назад
    How China's 'Perfect' Spy Got Caught | Bloomberg Investigates
    Опубликовано: 21 час назад
  • Testing The World's First Solid-State Battery 9 часов назад
    Testing The World's First Solid-State Battery
    Опубликовано: 9 часов назад
  • Нейросети захватили соцсети: как казахстанский стартап взорвал все AI-тренды и стал единорогом 3 недели назад
    Нейросети захватили соцсети: как казахстанский стартап взорвал все AI-тренды и стал единорогом
    Опубликовано: 3 недели назад
  • Путин пошёл на крайние меры / Срочное обращение к силовикам 4 часа назад
    Путин пошёл на крайние меры / Срочное обращение к силовикам
    Опубликовано: 4 часа назад
  • Focus Like a CEO • Midnight Ocean Penthouse Mix for Deep Work & Productivity 3 месяца назад
    Focus Like a CEO • Midnight Ocean Penthouse Mix for Deep Work & Productivity
    Опубликовано: 3 месяца назад
  • Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead 4 недели назад
    Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead
    Опубликовано: 4 недели назад
  • AI is changing the World Of Theoretical Physics, Fast. 14 часов назад
    AI is changing the World Of Theoretical Physics, Fast.
    Опубликовано: 14 часов назад
  • Trump delivers the 2026 State of the Union address | FULL Трансляция закончилась 2 часа назад
    Trump delivers the 2026 State of the Union address | FULL
    Опубликовано: Трансляция закончилась 2 часа назад
  • Дарио Амодеи — «Мы близки к концу экспоненты» 11 дней назад
    Дарио Амодеи — «Мы близки к концу экспоненты»
    Опубликовано: 11 дней назад
  • The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF 21 час назад
    The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath | People by WTF
    Опубликовано: 21 час назад
  • Как заговорить на любом языке? Главная ошибка 99% людей в изучении. Полиглот Дмитрий Петров. 12 дней назад
    Как заговорить на любом языке? Главная ошибка 99% людей в изучении. Полиглот Дмитрий Петров.
    Опубликовано: 12 дней назад
  • AI Will Replace White Collar Jobs in 12 Months? The Truth No One Explains 16 часов назад
    AI Will Replace White Collar Jobs in 12 Months? The Truth No One Explains
    Опубликовано: 16 часов назад
  • Почему мексиканские наркокартели почти непобедимы? | Сергей Бойко на Breakfast Show 16 часов назад
    Почему мексиканские наркокартели почти непобедимы? | Сергей Бойко на Breakfast Show
    Опубликовано: 16 часов назад
  • Искусственный интеллект не так силен, как мы думаем | Ханна Фрай 6 дней назад
    Искусственный интеллект не так силен, как мы думаем | Ханна Фрай
    Опубликовано: 6 дней назад
  • Мир-система бронзового века | Лекция Ивана Семьяна 3 дня назад
    Мир-система бронзового века | Лекция Ивана Семьяна
    Опубликовано: 3 дня назад
  • What the hell happened with AGI timelines in 2025? 2 недели назад
    What the hell happened with AGI timelines in 2025?
    Опубликовано: 2 недели назад
  • By 2050 we could get 7 дней назад
    By 2050 we could get "10,000 years of technological progress"
    Опубликовано: 7 дней назад
  • 1,000,000 Jobs Never Existed — This Is the Last Salary Economy 16 часов назад
    1,000,000 Jobs Never Existed — This Is the Last Salary Economy
    Опубликовано: 16 часов назад

Контактный email для правообладателей: u2beadvert@gmail.com © 2017 - 2026

Отказ от ответственности - Disclaimer Правообладателям - DMCA Условия использования сайта - TOS



Карта сайта 1 Карта сайта 2 Карта сайта 3 Карта сайта 4 Карта сайта 5