Watch this before using OpenClaw AI agent
AI agents like Clawdbot / "Jarvis" desktop bots are going viral because they feel like the next leap: not just chatting, but actually doing work across your apps (email, Slack, calendars, files). The upside is real, but so are the risks. In this video, I break down what can go wrong (data leaks, credential exposure, accidental destructive actions) and the practical safety system you should use before you let an agent touch anything important.

Who this is for:
Founders, operators, and builders experimenting with AI agents, especially if you're connecting them to Gmail, Slack, Calendar, CRMs, databases, or internal tools.

What you'll learn:
- What Clawdbot-style agents actually do (and why they feel like "Jarvis")
- The biggest failure mode: agents + access + vague instructions
- How a simple email can trick an agent into leaking data
- The hidden question nobody asks: where is the agent's memory stored?
- A real "oops" scenario: agent actions that can delete files or wipe systems
- The safety framework: Read-only → Drafting → Execution
- How to sandbox agents using separate accounts + forwarding
- A rollout rule to reduce risk while still getting value fast

Timestamps
0:00 - Clawdbot goes viral (the "Jarvis" moment)
0:26 - What Clawdbot actually is (an agent across your apps)
1:33 - What an AI agent is (vs a normal chatbot)
3:25 - Risk #1: Security — credentials & confidential data
3:43 - Risk #2: Prompt injection — email tricks the agent into leaking data
5:33 - Risk #3: Memory — where is it stored, who owns it, does it expire?
6:56 - Risk #4: Destructive actions — vague instructions lead to real damage (wiped Mac)
7:16 - Why the user is often the biggest risk (bad instructions + over-permissioning)
8:44 - The safety framework: Read-only → Drafting → Execution
11:29 - Safety setup: use secondary accounts + forwarding (reduce blast radius)
12:04 - Rollout rule: test in read-only/drafting before full execution
12:20 - Design memory intentionally (don't outsource it by accident)
14:11 - Final thoughts

About me
I'm Vlad, founder of Green Republic, a London-based AI product studio. We've built 80+ AI products in 23 countries, and on this channel I break down practical frameworks, tools, and case studies to help founders turn AI into real business results (minus the hype).

Let's connect
➢ Subscribe for weekly AI breakdowns: / @vladrepublic
➢ Connect on LinkedIn: / vladrepublic
➢ Follow on Instagram: / vladrepublic
➢ Work with us: https://greenrepublic.ai/
➢ Business inquiries: hey@greenrepublic.ai
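The Read-only → Drafting → Execution framework above can be sketched in code as a simple permission gate. This is a minimal illustrative sketch, not anything shown in the video: the `Tier` levels, the `REQUIRED_TIER` registry, and the `authorize` function are all hypothetical names, and a real agent harness would enforce this at the tool-call layer.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical trust tiers, lowest to highest."""
    READ_ONLY = 0   # agent may fetch and summarize, never write
    DRAFTING = 1    # agent may prepare drafts; a human sends them
    EXECUTION = 2   # agent may act directly, after trust is earned

# Hypothetical action registry: each action declares the minimum tier it needs.
REQUIRED_TIER = {
    "read_inbox": Tier.READ_ONLY,
    "draft_reply": Tier.DRAFTING,
    "send_email": Tier.EXECUTION,
    "delete_file": Tier.EXECUTION,
}

def authorize(action: str, agent_tier: Tier) -> bool:
    """Allow an action only if the agent's tier covers it.

    Unregistered actions are denied (fail closed), which limits the
    blast radius of vague instructions or prompt injection.
    """
    required = REQUIRED_TIER.get(action)
    if required is None:
        return False
    return agent_tier >= required

# Rollout rule from the video: start low, promote gradually.
agent_tier = Tier.DRAFTING
print(authorize("read_inbox", agent_tier))   # True
print(authorize("send_email", agent_tier))   # False
```

The point of the fail-closed default is that an injected or hallucinated action name is blocked automatically; promoting the agent to `EXECUTION` becomes a deliberate, reviewable decision rather than the starting state.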