The infrastructure of reasoning is evolving fast, and the builders behind it are rethinking everything from model evaluation to the physics of AI flywheels. In this 2025 IA Summit discussion, Bloomberg's Dina Bass moderated a conversation with Vijay Karunamurthy (formerly of Scale AI), Noah Smith (Allen Institute for AI), and Jonathan Cohen (NVIDIA) on what's next for model architectures, open ecosystems, and agent reliability.

Vijay reflected on Scale's work on training data for reasoning models and described how benchmarks like "Humanity's Last Exam" are exposing the limits of today's models and pointing toward new approaches such as parallel search and reasoning at scale. Noah shared how reinforcement learning with verifiable rewards and higher-quality data are unlocking the next wave of model improvement, emphasizing that "the data always matters more than the algorithm." Jonathan explained why NVIDIA considers itself an "AI infrastructure company," not just a chipmaker, and why building models like Nemotron in the open is essential to advancing both scientific progress and practical innovation.

Together, the group discussed the rise of flywheel systems (self-improving AI architectures that learn from usage and feedback) and how they'll shape the next generation of intelligent agents. As Cohen put it, "The more you use it, the better it gets." This conversation offers a rare inside look at how the researchers and engineers driving AI's core infrastructure are thinking about the road ahead, and what it will take to make reasoning systems trustworthy, scalable, and truly adaptive.

Chapters:
00:00 Introduction & Panelist Intros
01:30 Barriers to Enterprise AI Adoption
03:00 Infrastructure & Software Gaps
04:25 Model Refinement & Fine-Tuning
06:30 Reinforcement Learning & Data Quality
08:00 NVIDIA's Full-Stack Approach
09:45 Open vs. Closed Model Debate
12:35 Global Competition & Open Science
14:00 Scientific Discovery & AI Creativity
15:00 Flywheels & Self-Healing Agents
17:00 Mixture of Experts & New Architectures
18:45 The Future: Robotics, Multimodality, and AI-Accelerated Engineering
21:00 NVIDIA's AI Mandate