Jennifer Sand & Brandy Pielech - Beyond Tests: What to Verify in AI Generated Code
Agent-based coding introduces new quality risks for teams that rely primarily on traditional testing approaches. Because agents default to "best guesses" when given insufficient or underspecified context, these gaps, if left unchecked, can result in production issues related to performance, stability, or unexpected edge cases. However, when teams provide clear invariants and non-functional requirements, coupled with review cycles that ensure they are met, agents can produce significantly higher quality code, reducing downstream maintenance costs.

This talk presents an invariant-driven framework for deciding what to verify and where those checks belong in your pipeline. We'll introduce a simple invariant taxonomy that delivers immediate benefits through verification. The taxonomy classifies each invariant by scope (universal across any repo, system/architecture-specific, or feature-level) and by type of check (data contract, business logic, or performance/SLA), coupled with a target remediation (advisory only, block merge, or rewrite).

We'll conclude with a before-and-after demo leveraging Tessl's Specification Registry that demonstrates the benefits of incorporating invariants into your agentic coding workflows. Attendees will leave with a practical checklist they can apply immediately.
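To make the taxonomy concrete, here is a minimal sketch of how an invariant could be modeled as a record combining scope, type of check, and target remediation. The field names, enum values, and example invariants are illustrative assumptions, not Tessl's actual registry format.

```python
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    UNIVERSAL = "universal"   # holds across any repository
    SYSTEM = "system"         # tied to this system's architecture
    FEATURE = "feature"       # applies to a single feature


class CheckType(Enum):
    DATA_CONTRACT = "data_contract"
    BUSINESS_LOGIC = "business_logic"
    PERFORMANCE_SLA = "performance_sla"


class Remediation(Enum):
    ADVISORY = "advisory"        # report only
    BLOCK_MERGE = "block_merge"  # fail the merge check
    REWRITE = "rewrite"          # send the change back to the agent


@dataclass
class Invariant:
    name: str
    scope: Scope
    check_type: CheckType
    remediation: Remediation
    description: str


# Hypothetical invariants a team might register and verify in CI.
INVARIANTS = [
    Invariant(
        name="api-responses-are-versioned",
        scope=Scope.UNIVERSAL,
        check_type=CheckType.DATA_CONTRACT,
        remediation=Remediation.BLOCK_MERGE,
        description="Every public API response includes a schema version field.",
    ),
    Invariant(
        name="checkout-latency-p99",
        scope=Scope.FEATURE,
        check_type=CheckType.PERFORMANCE_SLA,
        remediation=Remediation.ADVISORY,
        description="Checkout endpoint p99 latency stays under 300 ms.",
    ),
]

if __name__ == "__main__":
    for inv in INVARIANTS:
        print(f"{inv.name}: {inv.scope.value}/{inv.check_type.value} -> {inv.remediation.value}")
```

In a pipeline, a CI step could load such a registry, run the corresponding checks against agent-generated changes, and apply each invariant's remediation: log an advisory, block the merge, or hand the change back to the agent for another pass.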