Evals, reducing hallucinations, & AI-native development
The episode opens with Amy Heineike outlining Tessl's core mission: building documentation registries optimized for coding agents. Daniel Jones notes the pervasive frustration of API hallucinations, where models invent idealized but non-existent methods that waste developer cycles. Amy explains that models often struggle with APIs that are too new or too old for their training sets, creating a critical need for external grounding. The pair lament the efficiency lost when agents trawl through bloated web pages or unoptimized node modules. Amy introduces the Registry as a version-locked context provider that keeps agents from polluting their context windows with raw text: through an MCP server, agents access summary documentation and stay grounded without token-heavy web crawls.

The discussion pivots to verification methodology. Amy likens the shift from unit testing to evaluations to moving from hard logic to biological science. In traditional engineering, a fixed unit test stays fixed; in agentic systems, success is measured across a basket of scenarios. This requires developers to think like statisticians, examining success averages and variance rather than binary pass/fail states.

The episode then explores the paradox of detail: providing more task instructions can cause agents to ignore broader system-level steering. Amy shares research showing that as task prescriptiveness increases, agents weigh local context over global rules.

The conversation deepens around non-deterministic but high-performing systems. They discuss the Ralph Wiggum loop and Steve Yegge's Gastown framework, illustrating how agentic head-banging against errors can lead to superior, anti-fragile outcomes. Daniel introduces the Van Halen "brown M&M" feedback loop as a psychological steering mechanism: developers can use emoji triggers to verify whether a model is respecting the context window.

The dialogue concludes with forward-looking organizational analysis.
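The statistical framing of evals described above can be sketched in a few lines: run each scenario in a basket many times, then report mean success rate and variance rather than a single pass/fail bit. Everything here is illustrative, not from the episode: `run_agent`, the scenario names, and the 70% success threshold are all assumptions standing in for a real agent harness.

```python
import random
import statistics

def run_agent(scenario: str, seed: int) -> bool:
    # Stand-in for a real, non-deterministic agent call (assumption).
    # Seeded deterministically so the sketch is reproducible; a real
    # harness would invoke the agent and score its output instead.
    return random.Random(f"{scenario}:{seed}").random() < 0.7

def evaluate(scenarios: list[str], trials: int = 20) -> dict:
    # Success rate per scenario, plus aggregate statistics over the basket.
    per_scenario = {
        s: sum(run_agent(s, seed) for seed in range(trials)) / trials
        for s in scenarios
    }
    rates = list(per_scenario.values())
    return {
        "per_scenario": per_scenario,
        "mean": statistics.mean(rates),       # average success across basket
        "variance": statistics.pvariance(rates),  # spread between scenarios
    }

report = evaluate(["parse config", "call API", "write migration"])
```

The point of the variance term is exactly the "biological science" framing from the discussion: two agents with the same mean can behave very differently, and a high-variance agent is the one that needs watching.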
As AI capabilities coalesce, the rigid boxes of product, design, and engineering begin to merge. Amy and Daniel envision the rise of the Product Engineer, a role focused on intentionality and outcomes rather than syntax. They argue that defining what a good outcome looks like becomes the primary lever of control. Amy encourages embracing the chaos of the transition, suggesting that stability is found in accepting variability rather than fighting for perfect determinism.

Key Themes Explored:

• Machine-Optimized Contextual Grounding: Tessl provides unpolluted, machine-ready registries that prevent token-heavy hallucinations in cutting-edge or legacy APIs.
• Probabilistic Verification: Engineering is shifting from binary unit tests toward statistical evaluation modeling, treating systems as biological entities requiring constant observation.
• The Paradox of Detailed Steering: Hyper-prescriptive prompts often cause loss of global instruction adherence; architects must balance task detail with system steering.
• Anti-Fragility via Non-Determinism: Embracing non-deterministic loops allows systems to escape local maxima and discover stable solutions by learning from failures.
• Outcome-Focused Engineering: AI is merging product management and development into a single outcome-oriented discipline focused on defining intent.
• Multi-Pass Agentic Architectures: Breaking logic, security, and performance into specialized sequential passes prevents cognitive overload and improves reliability.
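The Van Halen "brown M&M" feedback loop mentioned in the episode can be sketched as a canary check: plant a trivially verifiable instruction in the system context, then mechanically confirm the reply honors it. This is a hedged sketch under stated assumptions: `call_model` is a hypothetical stand-in, not a real client API, and the specific emoji is arbitrary.

```python
# "Brown M&M"-style canary: if the canary is missing from a reply, the model
# likely truncated or ignored the system context, just as a bowl of brown
# M&Ms signaled an unread rider.

CANARY = "🟫"  # arbitrary emoji trigger (assumption)

SYSTEM_RULES = (
    "You are a coding agent. Follow the project rules strictly.\n"
    f"Always end every reply with {CANARY} to confirm you read these rules."
)

def call_model(system: str, prompt: str) -> str:
    # Hypothetical stand-in so the sketch runs; a real LLM client call
    # would go here.
    return f"Here is the refactored parser. {CANARY}"

def context_respected(reply: str) -> bool:
    # The check itself: cheap, mechanical, and independent of the task.
    return reply.rstrip().endswith(CANARY)

reply = call_model(SYSTEM_RULES, "Refactor the parser.")
```

The appeal of this pattern is that the verification costs nothing per request, yet gives a continuous signal about whether system-level steering is still being read as task prescriptiveness grows.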