Case IV: OpenClaw on VPS, LLM on Local PC (Reverse SSH Deep Dive) - OverExplained
In this video, I demonstrate Case IV of my OpenClaw architecture, a hybrid AI setup where:
• OpenClaw runs on a cloud VPS (RunPod GPU instance)
• LLMs run locally on my Windows machine using Ollama
• The two systems communicate seamlessly over a reverse SSH tunnel

You will need a GPU.
Runpod: https://get.runpod.io/pe48
Json edits: https://docs.google.com/document/d/12...

This setup lets a cloud-hosted application securely access and use LLMs running on your local machine, without exposing your local system directly to the internet.

🔍 What's covered in this video:
• Why and when you'd want to run LLMs locally but keep orchestration in the cloud
• How reverse SSH tunneling works (conceptually and practically)
• Setting up reverse SSH from Windows → RunPod
• Connecting OpenClaw (on the VPS) to Ollama (on localhost)
• Port forwarding, security considerations, and common pitfalls
• Real-world use cases for hybrid cloud–local AI systems

🧠 Why this matters:
• Keep model inference local (privacy, cost, experimentation)
• Still leverage cloud GPUs, orchestration, and uptime
• No public IP or firewall changes needed on your local machine
• Ideal for devs building agent systems, copilots, or AI backends

This pattern is extremely powerful for anyone working with:
• Local LLMs (Ollama, LM Studio, etc.)
• Cloud GPUs (RunPod, VPS, EC2, bare metal)
• Secure networking and AI infrastructure design

If you're building serious AI systems, this is a setup worth understanding.

CHANNEL LINKS:
☕ Buy me a coffee: https://ko-fi.com/promptengineer
📱 Support me on Patreon: / promptengineer975
📞 Get on a Call with me at Calendly: https://calendly.com/prompt-engineer4...
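As a rough sketch of the tunneling step, the reverse forward looks like this (assuming Ollama's default port 11434; the VPS address and SSH port are placeholders for your own RunPod connection details, not values from the video):

```shell
# Run on the local Windows machine (built-in OpenSSH client in PowerShell/cmd).
# -R 11434:localhost:11434 asks the VPS's sshd to listen on its own port 11434
#   and forward every connection back through the tunnel to the Ollama server
#   running on this machine at localhost:11434.
# -N opens no remote shell; the session exists only to carry the tunnel.
# <VPS_IP> and <SSH_PORT> are placeholders for your RunPod instance.
ssh -N -R 11434:localhost:11434 root@<VPS_IP> -p <SSH_PORT>
```

Note that by default sshd binds remote forwards to the VPS's loopback interface only, so the tunneled port is reachable by OpenClaw on the same VPS as localhost:11434 but is not exposed to the internet.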
💀 GitHub Profile: https://github.com/PromptEngineer48
🔖 Twitter Profile: / prompt48

Hashtags: #reversessh, #sshtunneling, #runpod, #ollama, #localllm, #cloudgpu, #hybridai, #openclaw, #llminfrastructure, #aiarchitecture, #devopsforai, #selfhostedllm, #privatellm, #windowsollama, #vpsgpu, #aiagents, #securetunneling, #portforwarding, #aibackend

Time Stamps:
0:00 Intro
1:32 Runpod
2:22 Setting up VPS
4:04 Update and Upgrade
4:25 Ollama Installation
4:50 Install OpenClaw
6:02 Telegram Integration
7:54 Downloading Models
9:50 Explaining Curl
10:30 set 0
10:57 Reverse Tunnel SSH
13:10 nano json edits
15:34 Demo
17:10 Conclusion
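For the curl check the timestamps refer to, a minimal way to verify the tunnel from the VPS side is to hit Ollama's standard HTTP API through the forwarded port (the model name here is a placeholder; use whichever model you pulled locally):

```shell
# On the VPS, with the tunnel up: localhost:11434 now reaches the local
# Ollama instance on the Windows machine.

# List the models available on the local machine:
curl http://localhost:11434/api/tags

# Run a quick non-streaming generation ("llama3" is a placeholder model name):
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
```

If the first command returns a JSON list of models, the reverse tunnel is working end to end and OpenClaw can be pointed at the same localhost:11434 endpoint.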