MLCommons today released AILuminate, a first-of-its-kind safety test for large language models (LLMs). The v1.0 benchmark, which provides a series of safety grades for the most widely used LLMs, is the first AI safety benchmark designed collaboratively by AI researchers and industry experts. It builds on MLCommons' track record of producing trusted AI performance benchmarks, and it offers a scientific, independent analysis of LLM risk that can be immediately incorporated into company decision-making.

Event schedule:

10:05 - 10:30 AM | Opening Remarks on the Future of AI Safety
Peter Mattson, MLCommons President

10:30 - 11:20 AM | Deep Dive Lightning Talks
Lightning Talk 1: Assessment Standard: Eleonora Presani, Meta
Lightning Talk 2: Prompts and Infrastructure: Heather Frase, Veritech
Lightning Talk 3: Evaluator Mechanism: Shaona Ghosh, NVIDIA
Lightning Talk 4: Use Cases: Marisa Boston, Reins AI
Lightning Talk 5: Integrity: Sean McGregor, UL

11:25 AM - 12:15 PM | Panel Discussion
Moderated by: Peter Mattson, MLCommons President
Panelist 1: Nouha Dziri, Research Scientist at Allen Institute for AI
Panelist 2: Ion Stoica, Professor at the University of California, Berkeley
Panelist 3: April Chen, Director of Responsible AI Measurement at Microsoft
Panelist 4: Wan Sie Lee, Director of Artificial Intelligence (AI) and Data Innovation, IMDA

12:15 - 12:20 PM | Closing Remarks