"You don't fine-tune your way to AGI" - Here's why.

Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, with a particular focus on software development. Their distinctive strategy is reinforcement learning from code execution feedback, which they treat as an important axis for scaling AI capabilities beyond simply increasing model size or data volume (a minimal illustrative sketch of such a feedback loop follows the references below). Kant predicts that human-level AI in knowledge work could be achieved within 18-36 months, and outlines poolside's vision to dramatically increase software development productivity and accessibility.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

Eiso Kant:
https://x.com/eisokant
https://poolside.ai/

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs

2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization

3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements

4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution

5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation

6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

TRANSCRIPT:
https://www.dropbox.com/scl/fi/szepl6...

CORE REFS (see shownotes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (key finding on synthetic-data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (seminal NLP technique)
[00:22:15] OpenAI o3 model's breakthrough performance on the ARC Prize Challenge, OpenAI (significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (influential "Bitter Lesson" perspective)
[00:46:10] Benchmark challenge testing AI systems' abstract reasoning capabilities, François Chollet (important reasoning benchmark: ARC)
[00:49:55] Technical details of AlphaGo's search strategy and exploration-exploitation balance, David Silver et al. (landmark AI achievement details)
[01:10:15] Novel architecture combining linear attention with selective state spaces for efficient sequence modeling, Albert Gu et al. (Mamba/SSM, important new architecture)
[01:14:25] Foundational work on neural network interpretability through circuit analysis, Chris Olah (key interpretability research)
[01:19:10] Karpathy's vision of neural networks replacing traditional programming paradigms, Andrej Karpathy (influential "Software 2.0" concept)
[01:30:25] DeepMind's breakthrough in interactive environment generation using foundation world models, DeepMind Research Team (Genie 2, cutting-edge world models)
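To make the interview's central technical idea concrete, here is a minimal sketch of reinforcement learning from code execution feedback. This is not poolside's actual pipeline (which is not public): the `execution_reward` helper, the hard-coded Fibonacci candidates, and the unit tests are all hypothetical, and the policy-update step is omitted. The sketch only shows the core loop in which candidate programs are executed against tests in a subprocess and the pass/fail outcome becomes a scalar reward.

```python
# Illustrative sketch only: NOT poolside's actual training pipeline.
# Candidate generation is stubbed out with hard-coded programs; in a real
# RL setup the rewards below would drive an update of the code model.

import os
import subprocess
import sys
import tempfile
import textwrap


def execution_reward(candidate_source: str, test_source: str,
                     timeout_s: float = 5.0) -> float:
    """Run a candidate program together with its tests in a subprocess and
    return a binary reward: 1.0 if all asserts pass, 0.0 otherwise."""
    program = candidate_source + "\n\n" + test_source
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout_s)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0
    finally:
        os.unlink(path)


# Hypothetical candidates a code model might propose for the task
# "return the nth Fibonacci number".
candidates = [
    "def fib(n):\n    return n",  # plausible but wrong
    ("def fib(n):\n"
     "    a, b = 0, 1\n"
     "    for _ in range(n):\n"
     "        a, b = b, a + b\n"
     "    return a"),             # correct
]

# Unit tests that define the execution-feedback signal.
tests = textwrap.dedent("""
    assert fib(0) == 0
    assert fib(1) == 1
    assert fib(10) == 55
""")

if __name__ == "__main__":
    for i, source in enumerate(candidates):
        print(f"candidate {i}: reward = {execution_reward(source, tests)}")
```

Running the sketch prints reward 0.0 for the incorrect candidate and 1.0 for the correct one, which is the kind of objective, verifiable signal the interview argues code execution uniquely provides for scaling model capabilities.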
