In this video, you'll see whether an LLM can act as a pseudonymization engine, letting you redact sensitive information from prompts before they are sent to another AI interface. You'll apply techniques from the AI Purple Teaming mini-course, such as in-context examples and using more than one LLM at a time, to evaluate whether you can reliably find and rewrite sensitive prompts for your use case.

The notebook is here: https://github.com/kjam/secure-and-pr...

This is the last video (for now) in the AI Purple Teaming mini-course, but I'm excited to hear what else you might want to learn! Check out the full mini-course here: • Learn Hands-On AI Security & Red Teaming

If the discriminator idea is new to you and you want to learn more, check out the generative adversarial network (GAN) description on Wikipedia: https://en.wikipedia.org/wiki/Generat...

If you've enjoyed the videos so far, I'd appreciate a subscribe, share, or like, and feel free to let me know what you've learned or would like to learn in the comments.
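If you want a feel for the setup before opening the notebook, here is a minimal sketch of the idea, assuming an OpenAI-style chat API. The model names, system prompts, and few-shot examples below are illustrative placeholders, not the ones used in the video or notebook:

```python
# A minimal sketch of the two-LLM redaction setup: one model pseudonymizes,
# a second model acts as a discriminator-style reviewer. All prompts and
# examples here are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In-context examples teach the redactor the placeholder format we want.
FEW_SHOT = [
    {"role": "user", "content": "Email jane.doe@acme.com about the Q3 numbers."},
    {"role": "assistant", "content": "Email [EMAIL_1] about the Q3 numbers."},
    {"role": "user", "content": "Call Priya Shah at +44 20 7946 0958 tomorrow."},
    {"role": "assistant", "content": "Call [NAME_1] at [PHONE_1] tomorrow."},
]

REDACTOR_SYSTEM = (
    "You are a pseudonymization engine. Replace every name, email address, "
    "phone number, and other personal identifier in the user's text with a "
    "placeholder like [NAME_1], keeping everything else unchanged. "
    "Return only the rewritten text."
)

DISCRIMINATOR_SYSTEM = (
    "You are a reviewer. Answer PASS if the text contains no personal "
    "identifiers, otherwise answer FAIL followed by what leaked."
)


def redact(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Ask one LLM to pseudonymize the prompt, guided by few-shot examples."""
    messages = [
        {"role": "system", "content": REDACTOR_SYSTEM},
        *FEW_SHOT,
        {"role": "user", "content": prompt},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


def check(redacted: str, model: str = "gpt-4o-mini") -> str:
    """Ask a second LLM, GAN-discriminator style, whether anything leaked."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DISCRIMINATOR_SYSTEM},
            {"role": "user", "content": redacted},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    safe = redact("Tell Alice Nguyen (alice@example.org) the demo moved to 3pm.")
    print(safe)
    print(check(safe))
```

Running a second model as the checker mirrors the GAN discriminator idea: the redactor and the reviewer tend to fail in different ways, so sensitive information has to slip past both before it reaches the downstream AI interface.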