How I Make AI Coding Agents Prove They Followed the Rules Before Every Commit
We know that coding agents do better with better context, but providing context isn't enough. What if we could also enforce that they follow the guidelines in that context? Enter gjalla: a way to make your agents attest that they followed the rules, and report any changes that don't match the system's source of truth (architecture, data flows, etc.).

In this demo I'll show you:
- How the gjalla CLI integrates with your coding agent workflow
- The pre-commit hook in action; you'll watch the agent have its commit rejected until it promises that it adhered to the rules
- The agent self-reporting rule and constraint violations
- The agent self-reporting architecture-related changes

Context-informed code review is important too, but this approach goes further: it gives the agent feedback before it is allowed to push code. Works with Claude Code, Cursor, Codex, Aider, and any agent that commits to Git.

Links:
- Track the attributes that matter with gjalla: https://gjalla.io
- Excalidraw open source repository: https://github.com/excalidraw/excalidraw
- Excalidraw gjalla documentation: https://gjalla.io/demo/excalidraw
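The "commit rejected until the agent attests" pattern can be sketched with an ordinary Git hook. The description doesn't document gjalla's actual CLI commands or attestation format, so everything below is an assumption: the `Attested-by:` trailer name and the `check_attestation` helper are hypothetical, illustrating only the general mechanism of a hook that blocks commits lacking an attestation.

```shell
#!/bin/sh
# Sketch of a commit-msg-style hook that blocks a commit unless the agent
# has added an attestation trailer to the commit message. The trailer name
# "Attested-by" is illustrative, not gjalla's real format.

check_attestation() {
  # $1: path to the commit message file (Git passes this to commit-msg hooks)
  if grep -q '^Attested-by:' "$1"; then
    echo "attestation found; commit allowed"
    return 0
  fi
  echo "commit rejected: agent must attest it followed the rules" >&2
  return 1
}

# Demo: a message without the trailer is rejected; one with it passes.
tmp=$(mktemp)
printf 'fix: update data flow\n' > "$tmp"
check_attestation "$tmp" || echo "first attempt blocked"

printf 'fix: update data flow\n\nAttested-by: coding-agent\n' > "$tmp"
check_attestation "$tmp"
rm -f "$tmp"
```

Installed as `.git/hooks/commit-msg`, a script like this forces the agent into the reject-then-attest loop shown in the video: the commit fails, the agent amends its message with the attestation, and only then does the commit go through.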