NEURAL NETWORKS ARE WEIRD! - Neel Nanda (DeepMind)

Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 26 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.

Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without anyone understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Interested in working on ARC or getting involved in their events? Go to https://tufalabs.ai/
***

SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!):
https://www.dropbox.com/scl/fi/36dvtf...

We riff on:
  • How neural networks develop meaningful internal representations beyond simple pattern matching
  • The effectiveness of chain-of-thought prompting and why it improves model performance
  • The importance of hands-on coding over extensive paper reading for new researchers
  • His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind
  • The role of mechanistic interpretability in AI safety

NEEL NANDA:
https://www.neelnanda.io/
https://scholar.google.com/citations?...
https://x.com/NeelNanda5

Interviewer - Tim Scarfe

TOC:
1. Part 1: Introduction
  [00:00:00] 1.1 Introduction and Core Concepts Overview
2. Part 2: Outside Interview
  [00:06:45] 2.1 Mechanistic Interpretability Foundations
3. Part 3: Main Interview
  [00:32:52] 3.1 Mechanistic Interpretability
4. Neural Architecture and Circuits
  [01:00:31] 4.1 Biological Evolution Parallels
  [01:04:03] 4.2 Universal Circuit Patterns and Induction Heads
  [01:11:07] 4.3 Entity Detection and Knowledge Boundaries
  [01:14:26] 4.4 Mechanistic Interpretability and Activation Patching
5. Model Behavior Analysis
  [01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification
  [01:33:27] 5.2 Model Personas and RLHF Behavior Modification
  [01:36:28] 5.3 Steering Vectors and Linear Representations
  [01:40:00] 5.4 Hallucinations and Model Uncertainty
6. Sparse Autoencoder Architecture
  [01:44:54] 6.1 Architecture and Mathematical Foundations
  [02:22:03] 6.2 Core Challenges and Solutions
  [02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations
  [02:34:41] 6.4 Research Applications in Transformer Circuit Analysis
7. Feature Learning and Scaling
  [02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters
  [03:02:46] 7.2 Scaling Laws and Training Stability
  [03:11:00] 7.3 Feature Identification and Bias Correction
  [03:19:52] 7.4 Training Dynamics Analysis Methods
8. Engineering Implementation
  [03:23:48] 8.1 Scale and Infrastructure Requirements
  [03:25:20] 8.2 Computational Requirements and Storage
  [03:35:22] 8.3 Chain-of-Thought Reasoning Implementation
  [03:37:15] 8.4 Latent Structure Inference in Language Models

Minimal code sketches of three techniques from the TOC - activation patching (4.4), steering vectors (5.3), and top-k sparse autoencoders (6.3) - follow below.
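Activation patching ([01:14:26]) reduces to a small recipe: cache one module's output on a "clean" prompt, re-run a "corrupted" prompt with that activation swapped back in, and measure how much of the clean behaviour is restored. The sketch below shows the idea in plain PyTorch using forward hooks; the names (`patched_metric`, `layer_module`, `metric`) are illustrative and not any particular library's API - tools such as Nanda's TransformerLens wrap this pattern, but only a generic `nn.Module` is assumed here.

```python
import torch

def patched_metric(model, clean_tokens, corrupted_tokens, layer_module, metric):
    """Activation patching, reduced to its core.

    Runs `model` on a clean prompt, caches `layer_module`'s output, then runs
    the corrupted prompt with that cached activation patched in. `metric`
    maps the patched logits to a scalar (e.g. a logit difference). Assumes
    both prompts tokenize to the same length.
    """
    cache = {}

    def save_hook(module, inputs, output):
        cache["clean"] = output.detach()

    def patch_hook(module, inputs, output):
        # Returning a value from a forward hook replaces the module's output.
        return cache["clean"]

    handle = layer_module.register_forward_hook(save_hook)
    with torch.no_grad():
        model(clean_tokens)
    handle.remove()

    handle = layer_module.register_forward_hook(patch_hook)
    with torch.no_grad():
        patched_logits = model(corrupted_tokens)
    handle.remove()

    return metric(patched_logits)
```

Sweeping this over layers and token positions is what localises which activations causally carry the behaviour of interest.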

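The steering-vector material ([01:36:28]) has a similarly small core: add a fixed direction to one layer's output during the forward pass. A sketch under stated assumptions - the hooked module emits a [batch, seq, d_model] tensor, and both `alpha` and the way the vector is obtained are placeholders rather than the exact setup discussed:

```python
import torch

def add_steering_hook(layer_module, steering_vector: torch.Tensor, alpha: float = 4.0):
    """Install a hook that nudges every position along a fixed direction.

    `steering_vector` is a [d_model] tensor - e.g. the difference between mean
    activations on two contrasting prompt sets; `alpha` sets how hard to steer.
    """
    def hook(module, inputs, output):
        # Broadcasting adds the same direction at every batch/sequence position.
        return output + alpha * steering_vector

    # Caller stops steering via the returned handle's .remove().
    return layer_module.register_forward_hook(hook)
```

The Golden Gate Claude experiment discussed at [01:30:00] is, at heart, this kind of feature amplification: clamp or boost a single learned direction and watch the model's behaviour follow.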
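Sections 6 and 7 of the interview are devoted to sparse autoencoders. Below is a rough sketch of the baseline architecture (overcomplete dictionary, ReLU encoder, L1 sparsity penalty) plus the top-k activation variant mentioned at [02:32:04]; the widths, initialisations, and coefficients are placeholders, not the configurations discussed in the interview.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstruct activations as sparse combinations of features."""

    def __init__(self, d_model: int, d_sae: int, l1_coeff: float = 1e-3):
        super().__init__()
        # d_sae >> d_model: an overcomplete dictionary of candidate features.
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor):
        # Encode: ReLU keeps feature activations non-negative and, combined
        # with the L1 penalty below, sparse.
        f = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the activation as a sparse linear combination.
        x_hat = f @ self.W_dec + self.b_dec
        # Reconstruction error plus sparsity penalty on feature activations.
        loss = F.mse_loss(x_hat, x) + self.l1_coeff * f.abs().sum(-1).mean()
        return x_hat, f, loss


def topk_activation(pre_acts: torch.Tensor, k: int) -> torch.Tensor:
    """Top-k alternative to ReLU + L1: keep only the k largest pre-activations
    per example, fixing the number of active features directly."""
    vals, idx = pre_acts.topk(k, dim=-1)
    out = torch.zeros_like(pre_acts)
    out.scatter_(-1, idx, F.relu(vals))
    return out


if __name__ == "__main__":
    sae = SparseAutoencoder(d_model=512, d_sae=8192)  # toy sizes
    x = torch.randn(64, 512)  # stand-in for cached residual-stream activations
    x_hat, features, loss = sae(x)
    loss.backward()  # gradients for one training step
```

Swapping `topk_activation` in for the encoder's ReLU is the variant covered in section 6.3: it pins sparsity at exactly k active features per example instead of tuning it indirectly through the L1 coefficient.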
