Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI governance, and, more generally, the impact of deploying foundation models.

Max is currently a Research Assistant to Owain Evans, mainly thinking about (and fixing) issues that might arise as we scale up our current ML systems, but he is also interested in issues arising from multi-agent failures and situational awareness.

Alan is a PhD student at Mila advised by Nicolas Le Roux, with a strong interest in AI safety, AI governance, and coordination. He has also recently been working with David Krueger, and he helped me with some of the recently published interviews (Machine Learning Street Talk and Christoph Schuhmann).

Disclaimer: this discussion is much more casual than the rest of the conversations in this podcast. It was completely impromptu: I just thought it would be interesting to have Max and Alan discuss model evaluations (also called "evals" for short), since they are both interested in the topic.

Transcript: https://theinsideview.ai/alan_and_max
Alan: / _achan96_
Max: / max_a_kaufmann

OUTLINE
00:00 Intro
00:46 Balancing Evaluations and Governance in AI Safety
10:30 Addressing Existential Risk Perception and AI Harms
21:20 AI Safety Concerns and Capability Development
32:34 AGI Timelines and Alignment Challenges
45:11 Building Bridges Between AI Safety and Fairness Communities
53:26 Understanding Risks and Cooperation in AI Safety
1:01:24 Alan's Journey into AI Safety Research