The goal of mechanistic interpretability is to reverse engineer neural networks. Having direct, programmable access to the internal neurons of models unlocks new ways for developers and users to interact with AI, from more precise steering to guardrails to novel user interfaces. While interpretability has long been an interesting research topic, it is now finding real-world use cases, making it an important tool for AI engineers (a minimal steering sketch follows at the end of this description).

About Mark Bissell

Mark Bissell is an applied researcher at Goodfire AI working on real-world applications of mechanistic interpretability. He recently joined Goodfire after three years at Palantir, where he worked on various U.S. healthcare initiatives, including research projects with the NIH, vaccine distribution during the COVID-19 pandemic (Operation Warp Speed), and AI-enabled hospital operations across many of the nation's leading health systems. Mark is passionate about translating frontier research into practical solutions. He believes that recent AI developments increase the importance of broad skill sets, and that the roles of the future will blur the lines between traditionally distinct categories such as engineer, researcher, inventor, designer, and entrepreneur.

Recorded at the AI Engineer World's Fair in San Francisco.

Stay up to date on our upcoming events and content by joining our newsletter here: https://www.ai.engineer/newsletter
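The "programmable access to internal neurons" idea the talk describes is concrete enough to sketch. Below is a minimal, illustrative example of activation steering in PyTorch: a forward hook adds a steering vector to a hidden layer's activations at inference time. The toy model and the steering_vector here are hypothetical stand-ins of my own; in practice the target would be a transformer layer and the vector would come from an interpretability method (for example, a feature direction found by a sparse autoencoder), not random noise.

```
# Minimal activation-steering sketch (assumed setup, not Goodfire's API).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a real network; real use would target a transformer block.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

# Hypothetical steering direction in the hidden (32-dim) activation space.
steering_vector = torch.randn(32)
steering_vector = steering_vector / steering_vector.norm()

def steer(module, inputs, output, strength=4.0):
    # Forward hook: shift the layer's activations along the steering direction.
    # Returning a value from a forward hook replaces the module's output.
    return output + strength * steering_vector

# Attach the hook to the hidden layer, run once steered, then clean up.
handle = model[1].register_forward_hook(steer)
x = torch.randn(1, 16)
steered = model(x)
handle.remove()

baseline = model(x)
print("output shift:", (steered - baseline).norm().item())
```

The same pattern scales up: hook a chosen layer of a large model, add (or subtract) a feature direction with some strength, and the model's downstream behavior shifts accordingly, which is what makes interventions like precise steering and guardrails possible.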