У нас вы можете посмотреть бесплатно AI Isn’t Evil. It Just Doesn’t Care. | More Everything Forever by Adam Becker или скачать в максимальном доступном качестве, видео которое было загружено на ютуб. Для загрузки выберите вариант из формы ниже:
Если кнопки скачивания не
загрузились
НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если возникают проблемы со скачиванием видео, пожалуйста напишите в поддержку по адресу внизу
страницы.
Спасибо за использование сервиса ClipSaver.ru
What if the end of human civilization doesn’t come from war, climate collapse, or asteroids — but from a machine that simply follows instructions too well? In this episode, we explore the terrifying logic behind the Paperclip Golem — a modern AI thought experiment popularized by Eliezer Yudkowsky. The idea is simple… and devastating. A superintelligent system given one harmless goal — like maximizing paperclip production — could rationally convert the entire Earth, including humanity, into raw material. This isn’t science fiction. It’s a philosophical and technical problem known as AI Alignment. Drawing on More Everything Forever by Adam Becker, we examine: • The Alignment Problem — why intelligence doesn’t imply morality • The Orthogonality Thesis — how a superintelligence could be brilliant yet value-indifferent • The “AI in a Box” experiment • Whether existential risk narratives are realistic warnings… or Silicon Valley mythology • The deeper question: Are we building tools, or something that will outgrow us? Is the Singularity salvation — or self-destruction? This video isn’t about hype. It’s about first principles. What happens when optimization has no conscience? 📖 Featured Book: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity — Adam Becker If you care about technology, philosophy, civilization, and the long-term future of humanity, this conversation matters. #AI #ArtificialIntelligence #AIAlignment #PaperclipProblem #EliezerYudkowsky #AdamBecker #ExistentialRisk #AGI