Anders Sandberg: Scary Futures Tier List - Halloween Special
Halloween special: a scary futures tier list that is spooky in theme & sobering in content. This tier list isn't scientific, it isn't the final say, and it doesn't exhaustively cover all doomsday risks - it's a bit of a gimmick and a fun intuition pump.

Anders Sandberg is a neuroscientist and futurist well known for sizing up the biggest canvases we've got. Formerly a senior research fellow at Oxford's Future of Humanity Institute, he has worked on AI, cognitive enhancement, existential risk, and those deliciously unsettling Fermi-paradox puzzles. His forthcoming books include "Law, Liberty and Leviathan: human autonomy in the era of existential risk and Artificial Intelligence" and a big one, "Grand Futures", a tour of what's physically possible for advanced civilisations. He authored classic papers such as "Daily Life Among the Jupiter Brains" (1999), and co-authored "Eternity in Six Hours" on intergalactic expansion and "Dissolving the Fermi Paradox".

0:00 Intro
1:33 Why a tier list of scary futures?
3:32 Doom by natural causes - everyone dies of natural causes (all at once)
5:03 Doom by asphyxiation - everyone suffocates (all at once)
6:08 Reasoning about super-unlikely but super-high-impact scenarios - Probing the Improbable [1]
7:21 Death by LHC (Large Hadron Collider) - particle physics risks
10:01 Dark Fire
15:30 Vacuum Decay - bubbles of nothing
18:34 How Unlikely is a Doomsday? [2]
21:01 AI Doom via Perverse Instantiation / Predictable Clickers
23:45 AI Doom - Death by (Right) Metaethics (also death by wrong metaethics)
27:48 AI Doom - Sleepwalking into oblivion
31:03 Meditations on Moloch - multipolar traps
33:12 Mindless Outsourcers
35:25 Enfeeblement, Lack of Autonomy - Serfdom conclusion
42:06 Perverse Instantiation of Proxy Values (and the orthogonality thesis + goal-content integrity)
43:32 Human alignment to the AI (where the AI is optimising for something weird, not for objectively good values)
45:36 AI: More Moral Than Us (related to Death by Metaethics) [3]
48:01 Higher Value Distraction & Value Lock-In via Avoidance of Higher Values [4] - also discussed: Indirect Normativity [5]
53:55 Value Lock-In via Totalitarianism
56:40 Rational convergence - AI, for instrumental reasons, aligns to what it predicts the cosmic collective wants (cooperative values, assuming offence/defence scaling favours defence) [6]
1:00:37 Cooperation through regular interactions and trade
1:02:18 Cosmic Cooperation Breakdown
1:03:45 Simulation Shutdown
1:04:52 Sycophantic AI makes us like it / Discomfort avoidance
1:06:50 Hacking Humans: YGBM tech (You've Gotta Believe Me) - Automation of Radical Persuasion - Soft Capture of Values
1:10:51 DIY wetlabs - backyard biohacking leading to bioterrorism
1:12:18 Geoengineering whiplash leading to cascade failure
1:14:12 Risk avoidance - Kindness trap - avoidance of suffering leads to risk aversion and excessive precaution
1:16:19 Big Rip
1:16:50 Wireheading - goal gaming
1:18:19 What might Superintelligence find scary? 'There is always something darker'

[1] Paper: 'Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes' by Toby Ord, Rafaela Hillerbrand, Anders Sandberg - https://arxiv.org/abs/0810.5515
[2] Paper: 'How Unlikely is a Doomsday Catastrophe?' by Nick Bostrom, Max Tegmark - https://arxiv.org/abs/astro-ph/0512204
[3] Blog post: 'More Moral Than Us' by Adam Ford - https://www.scifuture.org/more-moral-...
[4] Blog post: 'AI Alignment to Higher Values, Not Human Values' by Adam Ford - https://www.scifuture.org/ai-alignmen...
[5] Nick Bostrom, Superintelligence, Chapter 13; also see post: https://www.scifuture.org/indirect-no...
[6] Blog post: 'AI, Don't Be a Cosmic Jerk' - https://www.scifuture.org/ai-dont-be-...

#halloween #xrisk #ai #tierlist #superintelligence

Many thanks for tuning in!

Please support SciFuture by subscribing and sharing!

Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P...

Kind regards,
Adam Ford
Science, Technology & the Future - #SciFuture - http://scifuture.org