🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01
Published: 11 months ago

Tags: responsible AI, AI ethics, fairness, AI, podcast, NIST, generative AI profile, user-centered design, education, outreach, LLMs, Generative AI

In this episode we discuss AI Risk Management Frameworks (RMFs), focusing on NIST's Generative AI Profile:

✅ Demystify misunderstandings about AI RMFs: what they are for and what they are not for
✅ Unpack the challenges of evaluating AI frameworks
✅ Inert knowledge in frameworks needs to be activated through processes and user-centered design to bridge the gap between theory and practice

What can you do? 🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

🎙️ Who are your hosts and why should you even bother to listen?

Upol Ehsan makes AI systems explainable and responsible so that people who aren't at the table don't end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He's the Founder and CEO of BABL AI, an AI auditing firm.

All opinions expressed here are strictly the hosts' personal opinions and do not represent their employers' perspectives.

Follow us for more Responsible AI and the occasional sh*tposting:
Upol: / upolehsan
Shea: / shea-brown-26050465

CHAPTERS:
00:00 - What will we discuss in this episode?
01:22 - What are AI Risk Management Frameworks?
03:03 - Understanding NIST's Generative AI Profile
04:00 - What's the difference between NIST's AI RMF and the GenAI Profile?
08:38 - What are other equivalent AI RMFs?
10:00 - How do we engage with AI Risk Management Frameworks?
14:28 - Evaluating the Effectiveness of Frameworks
17:20 - Challenges of Framework Evaluation
21:05 - Evaluation Metrics are NOT always quantitative
22:32 - Frameworks are inert: they need to be activated
24:40 - The Gap of Implementing a Framework in Practice
26:45 - User-centered Design solutions to address the gap
28:36 - Consensus-based framework creation is a chaotic process
30:40 - A tip for small businesses to amplify their profile in RAI
31:30 - Takeaways

#ResponsibleAI #ExplainableAI #podcasts #aiethics
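The "frameworks are inert" point lends itself to a concrete illustration. Below is a minimal, hypothetical Python sketch of what "activating" framework knowledge through process might look like: it maps the NIST AI RMF's four core functions (Govern, Map, Measure, Manage, which are in the framework) to concrete, owned process steps. The `ActionItem` structure, its fields, and the sample checklist entries are illustrative assumptions, not anything from the episode or the framework text.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four core functions of the NIST AI RMF.
class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

# Hypothetical: one assignable step that turns a piece of inert
# framework guidance into something an organization actually does.
@dataclass
class ActionItem:
    function: RMFFunction
    guidance: str           # the inert framework statement
    process_step: str       # the activated, concrete task
    owner: str              # who is accountable for it
    evidence: list[str] = field(default_factory=list)  # artifacts showing it happened

    def is_activated(self) -> bool:
        # Guidance counts as activated only when someone owns it
        # and at least one piece of evidence backs it up.
        return bool(self.owner and self.evidence)

# Illustrative checklist for a generative AI system (invented entries).
checklist = [
    ActionItem(
        function=RMFFunction.MEASURE,
        guidance="Evaluate generative outputs for harmful content",
        process_step="Run the red-team prompt suite before each release",
        owner="ml-safety-team",
        evidence=["red_team_report_v3.pdf"],
    ),
    ActionItem(
        function=RMFFunction.GOVERN,
        guidance="Define roles and responsibilities for AI risk",
        process_step="Publish an AI risk RACI chart and review it quarterly",
        owner="",  # no owner yet: this guidance is still inert
    ),
]

for item in checklist:
    status = "activated" if item.is_activated() else "inert"
    print(f"[{item.function.value}] {item.process_step} -> {status}")
```

The design choice the sketch is making, in the spirit of the episode's user-centered design argument, is that a framework clause only "counts" once it has an owner and produces evidence; anything else remains theory on paper.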
