For years, many Google API keys were treated as “public” project identifiers, embedded in client-side code and protected mainly through referrer and API restrictions. But a recent discovery suggests Gemini changes that risk model: researchers found nearly 3,000 publicly exposed Google API keys that were still “live” and could be used to interact with Gemini endpoints, creating a new path to unauthorized usage, quota exhaustion, and potentially costly API charges.

In this episode of Cyberside Chats, we unpack what “changed the rules” actually means, why this is a classic cloud governance problem (old assumptions meeting new capabilities), and what to check right now. The bottom line: AI features are quietly expanding the blast radius of credentials you never intended to treat as secrets.

Key Takeaways

1. Audit legacy API keys before and after enabling AI services
   Inventory every API key across your cloud projects and confirm it is still required, properly scoped, and has a clear owner. Treat AI enablement as a formal trigger event to reassess any previously published or embedded keys in that same project.

2. Treat API keys as sensitive credentials in the AI era
   Even if a vendor once described a key as “not a secret,” AI endpoints materially increase financial and potential data-exposure risk. Apply rotation, monitoring, strict quotas, and real-time billing alerts accordingly.

3. Enforce least privilege at the API level
   Referrer or IP restrictions alone are insufficient. Every key should be explicitly limited to only the APIs it requires. “Allow all APIs” should not exist in production.

4. Isolate AI development from production application projects
   Avoid enabling AI services in long-lived projects that contain public-facing keys. Use separate projects, accounts, or subscriptions for AI experimentation and production workloads to reduce blast radius and cost exposure.

5. 
Update third-party risk management to include AI-driven credential and cost risk
   Ask vendors how API keys are scoped, restricted, rotated, and monitored, especially for AI services. Confirm that AI environments are isolated from production systems and that abnormal AI usage or billing spikes are actively monitored.

Resources:

1. Google API Keys Weren’t Secrets. But Then Gemini Changed the Rules (Truffle Security)
   https://trufflesecurity.com/blog/goog...
2. Previously harmless Google API keys now expose Gemini AI data (BleepingComputer)
   https://www.bleepingcomputer.com/news...
3. DEF CON 31 – “Private Keys in Public Places” (Tom Pohl) (YouTube)
4. Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security (LMG Security)
   https://www.lmgsecurity.com/exposed-s...
5. Google Cloud docs: API keys overview & best practices (Google)
   https://docs.cloud.google.com/api-key...
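As a minimal illustration of the audit step in takeaway 1, the sketch below scans a directory tree for strings matching the well-known shape of Google API keys (the prefix `AIza` followed by 35 URL-safe characters). The function name and the idea of scanning a local checkout are our own assumptions, not a tool from the episode; a match is only a candidate that should be triaged, not a confirmed live credential.

```python
import re
from pathlib import Path

# Google API keys follow a well-known shape: "AIza" + 35 URL-safe characters.
# A regex hit is a *candidate* key, not proof that the key is valid or live.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(root: str) -> list[tuple[str, str]]:
    """Return (file_path, key) pairs for every key-shaped string under root."""
    hits: list[tuple[str, str]] = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")  # skip undecodable bytes
        except OSError:
            continue  # unreadable file (permissions, broken symlink, ...)
        for match in GOOGLE_KEY_RE.findall(text):
            hits.append((str(path), match))
    return hits
```

Each hit should feed the audit workflow described above: identify the key's owner, confirm it is still needed, restrict it to only the APIs it requires, and rotate it if it was ever published in client-side code or a public repository.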