Podcast Episode 3 - Are LLMs the next generation of compilers?
In this episode, we explore a wild experiment: using AI to write raw x86_64 assembly — and building an entire toolchain around it. From a self-hosted assembler to a custom `asm` CLI with build, test, lint, format, benchmark, profiling, and fuzzing, we dive into what happens when you give an LLM tight feedback loops and let it operate at the lowest level of the machine.

We discuss:

- Treating LLMs as compilers
- Designing constraints (linting, static analysis, arenas, safety guarantees)
- Replacing NASM with a self-hosting assembler
- Building an async runtime (io_uring), an HTTP server, and a web framework in pure assembly
- Why iteration speed can outpace higher-level languages
- Opus 4.6 vs. 4.5, Codex 5.3, and how model behavior has changed
- Why writing assembly with AI feels like programming at the speed of thought

We also touch on EQBench, where Opus 4.6 currently leads in writing quality:
👉 https://eqbench.com

If the future of software is natural language → machine instructions, this might be what it looks like.
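The "tight feedback loop" idea mentioned above — model emits code, the toolchain checks it, diagnostics go straight back into the next prompt — can be sketched in a few lines. Everything here is hypothetical: `toy_assemble` is a stand-in for a real assembler, and `fake_llm` is a stub in place of an actual model call; only the loop structure reflects the approach discussed in the episode.

```python
def toy_assemble(source: str):
    """Toy stand-in for an assembler: accepts only 'mov reg, imm'
    and 'ret' lines, and returns (ok, diagnostics)."""
    errors = []
    for n, line in enumerate(source.splitlines(), 1):
        line = line.strip()
        if not line or line == "ret":
            continue
        parts = line.split()
        if parts[0] != "mov" or len(parts) != 3:
            errors.append(f"line {n}: unknown instruction '{line}'")
    return (not errors, errors)

def fake_llm(prompt: str, feedback: list[str]) -> str:
    """Stub in place of a real model call: emits a buggy program until
    the assembler's diagnostics point at the bad mnemonic, then fixes it."""
    if any("movv" in f for f in feedback):
        return "mov rax, 60\nret"
    return "movv rax, 60\nret"

def compile_with_llm(prompt: str, max_iters: int = 5) -> str:
    """Treat the model as a compiler: generate, assemble, and feed
    diagnostics back until the toolchain accepts the output."""
    feedback: list[str] = []
    for _ in range(max_iters):
        source = fake_llm(prompt, feedback)
        ok, errors = toy_assemble(source)
        if ok:
            return source
        feedback = errors  # the tight feedback loop
    raise RuntimeError("model failed to converge")

print(compile_with_llm("emit an exit(0) stub"))
```

In a real setup the assembler, linter, and test runner from the `asm` CLI would all contribute to `feedback`, which is where the iteration-speed advantage comes from.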