YUDKOWSKY + WOLFRAM ON AI RISK.

Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discussion centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding the fundamental nature of computational systems and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears. (A short illustrative sketch of computational irreducibility follows the table of contents below.)

SHOWNOTES (transcription, references, summary, best quotes etc.): https://www.dropbox.com/scl/fi/3st8dt...

*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested? Please go to https://tufalabs.ai/ ***

https://en.wikipedia.org/wiki/Eliezer...
https://en.wikipedia.org/wiki/Stephen...

TOC:
1. Foundational AI Concepts and Risks
  [00:00:00] 1.1 AI Optimization and System Capabilities Debate
  [00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
  [00:20:09] 1.3 Existential Risk and Species Succession
  [00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
  [00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
  [00:36:30] 2.2 Ethics and Moral Philosophy Debate
  [00:39:58] 2.3 Existential Risks and Digital Immortality
  [00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
  [00:54:39] 3.1 AI Persuasion Ethics and Truth
  [01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
  [01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
  [01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
  [01:21:21] 4.1 AI Perception and Physical Laws
  [01:28:33] 4.2 AI Capabilities and Computational Constraints
  [01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
  [01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
  [01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
  [01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
  [02:09:41] 5.3 Machine Learning as Assembly of Computational Components
  [02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
  [02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
  [02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
  [03:02:18] 6.3 Optimization Goals and Human Existential Risk
  [03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
  [03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
  [03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
  [03:56:05] 7.3 AI Risk and Biological System Analogies
  [04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
  [04:13:01] 8.1 Economic and Proliferation Considerations
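The recurring technical concept on Wolfram's side of the debate is computational irreducibility (TOC sections 1.2 and 5.1): for many simple programs there is no shortcut to knowing their behavior other than running them step by step. Below is a minimal sketch in Python using Wolfram's canonical example, the Rule 30 cellular automaton. The concept and the example system come from Wolfram's work; this particular code, grid width, and step count are illustrative assumptions, not material from the episode.

  # Rule 30: each new cell depends only on its left, center, and right
  # neighbors, via new = left XOR (center OR right). Despite this trivial
  # rule, the evolution is chaotic: no known closed form predicts row t
  # without simulating rows 1..t-1 (computational irreducibility).

  def rule30_step(cells):
      """Apply one Rule 30 update to a row of 0/1 cells (wrap-around edges)."""
      n = len(cells)
      return [
          cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
          for i in range(n)
      ]

  def run(width=79, steps=30):
      row = [0] * width
      row[width // 2] = 1  # start from a single black cell in the middle
      for _ in range(steps):
          print("".join("#" if c else " " for c in row))
          row = rule30_step(row)

  if __name__ == "__main__":
      run()

Running this prints the familiar chaotic Rule 30 triangle; the center column is random enough that it has been used as a pseudorandom generator. The point as it bears on the debate: even with complete knowledge of the rule and the initial state, predicting such a system still costs the full computation, which is the basis for Wolfram's skepticism that the behavior of complex AI systems can be cleanly predicted or characterized in advance.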
