Welcome to Episode 013 of "How I Use A.I. Tools"! In the last video, we were excited to use Roo Code to generate applications. This time, I'm giving you a real-world reality check on a common challenge: making Roo Code work with local models on moderate hardware. While many of us can't afford top-of-the-line GPUs, we do have capable machines with 32GB of RAM and recent CPUs. The documentation suggests that even smaller 7B-parameter models should work, but my experience was very different: I spent hours trying to get a single usable output, and I want to share my journey to save you the same frustration.

In this video, I'll reveal my struggles and show you the exact models I tested:

- codellama:7b-code (Roo Code's recommended model)
- mistralai/Mistral-7B-Instruct-v0.1
- llama3:8b-instruct-q5_1
- qwen2.5-coder:3b
- phi4
- gemma3
- qwen3

Did any of them work? I'll show you the useless text from the one model that responded and the complete silence from the rest. This video is an honest look at the gap between the promise of local AI coding and the reality for developers on consumer-grade hardware. If you're considering a similar setup, this video is a must-watch.

Subscribe to the "How I Use A.I. Tools" series for more honest reviews and practical insights into the world of AI!
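If you want to sanity-check a local model before wiring it into Roo Code, here is a minimal sketch. It assumes the models are served by a local Ollama instance on its default port (the episode doesn't state the serving setup, so that's my assumption); the ask helper and the test prompt are illustrative, not from the video.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask(model: str, prompt: str) -> str:
    """Send one non-streaming prompt to a local Ollama model and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Try a few of the tags named in the episode and print the start of each reply.
for model in ["codellama:7b-code", "llama3:8b-instruct-q5_1", "qwen2.5-coder:3b"]:
    print(model, "->", ask(model, "Write a Python function that reverses a string.")[:80])

If a model hangs or returns nonsense here, it will do no better inside Roo Code, so this is a quick way to rule models out before spending hours in the editor.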