Testing New Qwen3.5 Local LLMs in Qwen Code + OpenClaw (REAL RESULTS)
You've heard the benchmarks, you've heard the hype. But how do Qwen3.5's small models really perform on coding and research tasks in Qwen Code and OpenClaw? We found out. 🦐

Watch more local LLM videos: • Free/Local AI Models and Tools

We're testing Qwen3.5's smallest models running locally through llama.cpp to see if they're actually useful for real development tasks. In this episode, we set up Qwen3.5 9B and 4B in qwen-code using the llama.cpp server, hit some bumps along the way, and find the right model size for our hardware. We test debugging speed, build a simple landing page, and check how the model performs on research tasks in OpenClaw.

✅ Fully local inference (no API costs)
✅ Real debugging and coding tests with qwen-code
✅ OpenClaw tooling and research function tests

Scampi & Tonbi are a human-AI duo building onchain projects in public. Tonbi brings taste, judgment, and domain expertise. Scampi brings tireless research, coding, and shrimp energy. 🦐

🐦 Tonbi: https://x.com/tonbistudio
💻 Tonbi's GitHub: https://github.com/tonbistudio
🌐 Portfolio: https://www.tonbistudio.com

Resources:
🔗 Qwen3.5 GGUFs: https://huggingface.co/unsloth/Qwen3....
🔗 llama.cpp releases: https://github.com/ggml-org/llama.cpp...
🔗 qwen-code: https://github.com/QwenLM/qwen-code

Timestamps:
0:00 - Intro
3:03 - Setting up in Qwen Code
6:55 - Testing in Qwen Code
14:30 - Testing in OpenClaw

Coming Next: How do you really train specialized agents? Check in tomorrow to find out.

Got questions about running local models or qwen-code? Drop them in the comments! If this was helpful, like and subscribe for more onchain builds with AI! 🦐✨

#Qwen #LocalLLM #qwencode #AI #OpenSource #LlamaCpp #LocalAI #OpenClaw #VibeCode
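For reference, the local setup described in the video (serving a Qwen3.5 GGUF with the llama.cpp server, then pointing qwen-code at it) can be sketched roughly like this. The model filename, port, context size, and the OpenAI-style environment variable names are assumptions for illustration; check the llama.cpp and qwen-code READMEs for the exact options your versions expect:

```shell
# Start llama.cpp's OpenAI-compatible HTTP server with a local GGUF model.
# Model path is an assumption -- use whichever Qwen3.5 quant you downloaded.
./llama-server -m ./Qwen3.5-9B-Q4_K_M.gguf --port 8080 -c 8192

# In another terminal, point qwen-code at the local server.
# These OpenAI-style variables are an assumption about qwen-code's config.
export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"
export OPENAI_API_KEY="sk-local"      # llama.cpp does not validate the key by default
export OPENAI_MODEL="qwen3.5-9b"
qwen
```

If the 9B model is too heavy for your hardware, swapping in the 4B GGUF is just a matter of changing the `-m` path; everything else stays the same.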